We compare different methods (QMC, PCE, gradient-based) to quantify uncertainty in the random geometry of an airfoil profile. The uncertain outputs are the lift and drag coefficients (mean, variance, and exceedance probabilities).
This talk will review dynamic modeling and prediction for temporal and spatio-temporal data and describe algorithms for suitable state space models. Use of dynamic models for modeling crash types by severity will be briefly illustrated. Extension of these approaches for handling irregular temporal spacing and spatial sparseness will be discussed, and a potential application to travel time prediction will be explored.
As we prepare for a future of driverless cars, what new risks must we work to understand? Despite the connotation of driverless, we can expect that humans will remain in the loop at each iteration of increasingly autonomous technology integration. While our technology is advancing, our population and economics are also transitioning to present challenging paradigm shifts that we should account for in assessing the risks of driverless cars. Let us take this holistic systems engineering approach to exploring transportation at the Statistical and Applied Mathematical Sciences Institute.
As the analysis by Kalra & Paddock (2016) demonstrated, traditional crash data and analysis approaches may require hundreds of millions or billions of self-driving miles to achieve sufficient power to demonstrate that automated vehicles (AVs) have lower injury/fatality risk than human-driven vehicles. Moreover, crash risk for AVs is a moving target as algorithms and systems change, and the mistakes AVs will make are not necessarily the same mistakes humans make. Thus, we need to rethink both the data that will make up transportation safety datasets in the near future as well as the analytical approaches used. I will present some newer data-collection approaches along with some specific challenges that might call for different analytical approaches than are being used for crash data today.
Crashes on limited access roadways typically occur due to drivers being unable to react in time to avoid collisions with vehicles ahead of them either moving slower or merging
unexpectedly. Prevailing traffic stream conditions with high volume and low or variable speed downstream of low volume and high speed conditions can increase the possibilities for such collisions to occur. Real time trajectories of vehicles collected through crowd sourcing methods can give information about the distribution of speeds in the traffic stream by space
and time. Spatio-temporal models relating these observed speed distributions to the occurrence of crashes or near crashes can help to identify crash prone traffic conditions as
they arise, offering the opportunity to warn drivers before crashes occur.
Highway crash data, with an average of 39 thousand fatalities and 2.4 million nonfatal injuries per year, have repetitive and predictable patterns, and may benefit from statistical predictive
models to enhance highway safety and operations efforts to reduce crash fatalities and injuries. Highway crashes have patterns that repeat over fixed periods of time within the data set for
crash types such as motorcycle, bicycle, pedestrian, nighttime, fixed-object, weekend, and winter crashes. In some states these patterns are weekly, monthly, or seasonal. Contributing
factors such as age category, light condition, weather, weekday, the underlying state of the economy, and others impact these variations.
Remote-sensing data offer unprecedented opportunities to address Earth-system-science challenges, such as understanding the relationship between the atmosphere and Earth's surface using physics, chemistry, biology, mathematics, and computing. Statistical methods have often been seen as a hybrid of the latter two, so that a lot of attention has been given to computing estimates but far less to quantifying the uncertainty of the estimates. In my "bird's-eye view," I shall give a way to look at the problem using conditional probability models and three states of knowledge. Examples will be given of analyzing remotely sensed data of a leading greenhouse gas, carbon dioxide.
The melting of the West Antarctic ice sheet (WAIS) is likely to cause a significant rise in sea levels. Studying the present state of WAIS and predicting its future behavior involves the use of computer models of ice sheet dynamics as well as observational data. I will outline general statistical challenges posed by these scientific questions and data sets.
This discussion is based on joint work with Yawen Guan (Penn State/SAMSI), Won Chang (U. of Cincinnati), Patrick Applegate, and David Pollard (Penn State).
Climate change could have far-reaching consequences for human health across the 21st century. At the same time, development choices will alter underlying vulnerability to these risks, affecting the magnitude and pattern of impacts. The current and projected human health risks of climate change are diverse and wide-ranging, potentially altering the burden of any health outcome sensitive to weather or climate. Climate variability and change can affect morbidity and mortality from extreme weather and climate events, and from changes in air quality arising from changing concentrations of ozone, particulate matter, or aeroallergens. Altering weather patterns and sea level rise also may facilitate changes in the geographic range, seasonality, and incidence of selected infectious diseases in some regions, such as malaria moving into highland areas in parts of sub-Saharan Africa. Changes in water availability and agricultural productivity could affect undernutrition, particularly in parts of Asia and Africa. These risks are not independent, but will interact in complex ways with risks in other sectors. Policies and programs need to explicitly take climate change into account to facilitate sustainable and resilient societies that effectively prepare for, manage, and recover from climate-related hazards.
Highlights topics of discussion on remote sensing during Day 1 of Program on Mathematical and Statistical Methods for Climate and the Earth System Opening Workshop.
Climate Science presents several data intensive challenges that are the intersection of software architecture and data science. This includes developing approaches for scaling the analysis of highly distributed data across institutional and system boundaries. JPL has been developing approaches for quantitatively evaluating software architectures to consider different topologies in the deployment of computing capabilities and methodologies in order to support the analysis of distributed climate data. This talk will cover those approaches and also needed research in new methodologies as remote sensing and climate model output data continue to increase in their size and distribution.
Remote-sensing instruments have enabled the collection of big spatial data over large spatial domains such as entire continents or the globe. Basis-function representations are well suited to big spatial data, as they can enable fast computations for large datasets and they provide flexibility to deal with the complicated dependence structures often encountered over large domains. We propose two related multi-resolution approximations (MRAs) that use basis
functions at multiple resolutions to (approximately) represent any covariance structure. The first MRA results in a multi-resolution taper that can deal with large spatial datasets. The second MRA is based on a multi-resolution partitioning of the spatial domain and can deal with truly massive datasets, as it is highly scalable and amenable to parallel computations on distributed computing systems.
In this talk we consider the question of how to use QMC with an empirical dataset, such as a set of points generated by MCMC. Using ideas from partitioning for parallel computing, we apply recursive bisection to reorder the points, and then interleave the bits of the QMC coordinates to select the appropriate point from the dataset. Numerical tests show that in the case of known distributions this is almost as effective as applying QMC directly to the original distribution. The same recursive bisection can also be used to thin the dataset, by recursively bisecting down to many small subsets of points, and then randomly selecting one point from each subset. This makes it possible to reduce the size of the dataset greatly without significantly increasing the overall error. Co-author: Fei Xie
Recently, the machine learning community has expressed strong interest in applying latent variable modeling strategies to causal inference problems with unobserved confounding. Here, I discuss one of the big debates that occurred over the past year, and how we can move forward. I will focus specifically on the failure of point identification in this setting, and discuss how this can be used to design flexible sensitivity analyses that cleanly separate identified and unidentified components of the causal model.
I will discuss paradigmatic statistical models of inference and learning from high dimensional data, such as sparse PCA and the perceptron neural network, in the sub-linear sparsity regime. In this limit the underlying hidden signal, i.e., the low-rank matrix in PCA or the neural network weights, has a number of non-zero components that scales sub-linearly with the total dimension of the vector. I will provide explicit low-dimensional variational formulas for the asymptotic mutual information between the signal and the data in suitable sparse limits. In the setting of support recovery these formulas imply sharp 0-1 phase transitions for the asymptotic minimum mean-square-error (or generalization error in the neural network setting). A similar phase transition was analyzed recently in the context of sparse high-dimensional linear regression by Reeves et al.
Many different measurement techniques are used to record neural activity in the brains of different organisms, including fMRI, EEG, MEG, lightsheet microscopy and direct recordings with electrodes. Each of these measurement modes have their advantages and disadvantages concerning the resolution of the data in space and time, the directness of measurement of the neural activity and which organisms they can be applied to. For some of these modes and for some organisms, significant amounts of data are now available in large standardized open-source datasets. I will report on our efforts to apply causal discovery algorithms to, among others, fMRI data from the Human Connectome Project, and to lightsheet microscopy data from zebrafish larvae. In particular, I will focus on the challenges we have faced both in terms of the nature of the data and the computational features of the discovery algorithms, as well as the modeling of experimental interventions.
Bayesian Additive Regression Trees (BART) has been shown to be an effective framework for modeling nonlinear regression functions, with strong predictive performance in a variety of contexts. The BART prior over a regression function is defined by independent prior distributions on tree structure and leaf or end-node parameters. In observational data settings, Bayesian Causal Forests (BCF) has successfully adapted BART for estimating heterogeneous treatment effects, particularly in cases where standard methods yield biased estimates due to strong confounding.
We introduce BART with Targeted Smoothing, an extension which induces smoothness over a single covariate by replacing independent Gaussian leaf priors with smooth functions. We then introduce a new version of the Bayesian Causal Forest prior, which incorporates targeted smoothing for modeling heterogeneous treatment effects which vary smoothly over a target covariate. We demonstrate the utility of this approach by applying our model to a timely women's health and policy problem: comparing two dosing regimens for an early medical abortion protocol, where the outcome of interest is the probability of a successful early medical abortion procedure at varying gestational ages, conditional on patient covariates. We discuss the benefits of this approach in other women’s health and obstetrics modeling problems where gestational age is a typical covariate.
Difference-in-differences is a widely used evaluation strategy that draws causal inference from observational panel data. Its causal identification relies on the assumption of parallel trends, which is scale-dependent and may be questionable in some applications. A common alternative is a regression model that adjusts for the lagged dependent variable, which rests on the assumption of ignorability conditional on past outcomes. In the context of linear models, Angrist and Pischke (2009) show that the difference-in-differences and lagged-dependent-variable regression estimates have a bracketing relationship. Namely, for a true positive effect, if ignorability is correct, then mistakenly assuming parallel trends will overestimate the effect; in contrast, if the parallel trends assumption is correct, then mistakenly assuming ignorability will underestimate the effect. We show that the same bracketing relationship holds in general nonparametric (model-free) settings. We also extend the result to semiparametric estimation based on inverse probability weighting.
We develop sensitivity analyses for weak nulls in matched observational studies while allowing unit-level treatment effects to vary. In contrast to randomized experiments and paired observational studies, we show for general matched designs that over a large class of test statistics, any valid sensitivity analysis for the weak null must be unnecessarily conservative if Fisher's sharp null of no treatment effect for any individual also holds. We present a sensitivity analysis valid for the weak null, and illustrate why it is conservative if the sharp null holds through connections to inverse probability weighted estimators. An alternative procedure is presented that is asymptotically sharp if treatment effects are constant, and is valid for the weak null under additional assumptions which may be deemed reasonable by practitioners. The methods may be applied to matched observational studies constructed using any optimal without-replacement matching algorithm, allowing practitioners to assess robustness to hidden bias while allowing for treatment effect heterogeneity.
The world of health care is full of policy interventions: a state expands eligibility rules for its Medicaid program, a medical society changes its recommendations for screening frequency, a hospital implements a new care coordination program. After a policy change, we often want to know, “Did it work?” This is a causal question; we want to know whether the policy CAUSED outcomes to change. One popular way of estimating causal effects of policy interventions is a difference-in-differences study. In this controlled pre-post design, we measure the change in outcomes of people who are exposed to the new policy, comparing average outcomes before and after the policy is implemented. We contrast that change to the change over the same time period in people who were not exposed to the new policy. The differential change in the treated group’s outcomes, compared to the change in the comparison group’s outcomes, may be interpreted as the causal effect of the policy. To do so, we must assume that the comparison group’s outcome change is a good proxy for the treated group’s (counterfactual) outcome change in the absence of the policy. This conceptual simplicity and wide applicability in policy settings makes difference-in-differences an appealing study design. However, the apparent simplicity belies a thicket of conceptual, causal, and statistical complexity. In this talk, I will introduce the fundamentals of difference-in-differences studies and discuss recent innovations including key assumptions and ways to assess their plausibility, estimation, inference, and robustness checks.
We present recent advances and statistical developments for evaluating Dynamic Treatment Regimes (DTR), which allow the treatment to be dynamically tailored according to evolving subject-level data. Identification of an optimal DTR is a key component for precision medicine and personalized health care. Specific topics covered in this talk include several recent projects with robust and flexible methods developed for the above research area. We will first introduce a dynamic statistical learning method, adaptive contrast weighted learning (ACWL), which combines doubly robust semiparametric regression estimators with flexible machine learning methods. We will further develop a tree-based reinforcement learning (T-RL) method, which builds an unsupervised decision tree that maintains the nature of batch-mode reinforcement learning. Unlike ACWL, T-RL handles the optimization problem with multiple treatment comparisons directly through a purity measure constructed with augmented inverse probability weighted estimators. T-RL is robust, efficient and easy to interpret for the identification of optimal DTRs. However, ACWL seems more robust against tree-type misspecification than T-RL when the true optimal DTR is non-tree-type. At the end of this talk, we will also present a new Stochastic-Tree Search method called ST-RL for evaluating optimal DTRs.
A fundamental feature of evaluating causal health effects of air quality regulations is that air pollution moves through space, rendering health outcomes at a particular population location dependent upon regulatory actions taken at multiple, possibly distant, pollution sources. Motivated by studies of the public-health impacts of power plant regulations in the U.S., this talk introduces the novel setting of bipartite causal inference with interference, which arises when 1) treatments are defined on observational units that are distinct from those at which outcomes are measured and 2) there is interference between units in the sense that outcomes for some units depend on the treatments assigned to many other units. Interference in this setting arises due to complex exposure patterns dictated by physical-chemical atmospheric processes of pollution transport, with intervention effects framed as propagating across a bipartite network of power plants and residential zip codes. New causal estimands are introduced for the bipartite setting, along with an estimation approach based on generalized propensity scores for treatments on a network. The new methods are deployed to estimate how emission-reduction technologies implemented at coal-fired power plants causally affect health outcomes among Medicare beneficiaries in the U.S.
Laine Thomas presented information about how causal inference is being used to determine the cost/benefit of the two most common surgical treatments for women: hysterectomy and myomectomy.
We provide an overview of some recent developments in machine learning tools for dynamic treatment regime discovery in precision medicine. The first development is a new off-policy reinforcement learning tool for continual learning in mobile health to enable patients with type 1 diabetes to exercise safely. The second development is a new inverse reinforcement learning tool that enables the use of observational data to learn how clinicians balance competing priorities for treating depression and mania in patients with bipolar disorder. Both practical and technical challenges are discussed.
The method of differences-in-differences (DID) is widely used to estimate causal effects. The primary advantage of DID is that it can account for time-invariant bias from unobserved confounders. However, the standard DID estimator will be biased if there is an interaction between history in the after period and the groups. That is, bias will be present if an event besides the treatment occurs at the same time and affects the treated group in a differential fashion. We present a method of bounds based on DID that accounts for an unmeasured confounder that has a differential effect in the post-treatment time period. These DID bracketing bounds are simple to implement and only require partitioning the controls into two separate groups. We also develop two key extensions for DID bracketing bounds. First, we develop a new falsification test to probe the key assumption that is necessary for the bounds estimator to provide consistent estimates of the treatment effect. Next, we develop a method of sensitivity analysis that adjusts the bounds for possible bias based on differences between the treated and control units from the pretreatment period. We apply these DID bracketing bounds and the new methods we develop to an application on the effect of voter identification laws on turnout. Specifically, we focus on estimating whether the enactment of voter identification laws in Georgia and Indiana had an effect on voter turnout.
We study experimental design in large-scale stochastic systems with substantial uncertainty and structured cross-unit interference. We consider the problem of a platform that seeks to optimize supply-side payments p in a centralized marketplace where different suppliers interact via their effects on the overall supply-demand equilibrium, and propose a class of local experimentation schemes that can be used to optimize these payments without perturbing the overall market equilibrium. We show that, as the system size grows, our scheme can estimate the gradient of the platform’s utility with respect to p while perturbing the overall market equilibrium by only a vanishingly small amount. We can then use these gradient estimates to optimize p via any stochastic first-order optimization method. These results stem from the insight that, while the system involves a large number of interacting units, any interference can only be channeled through a small number of key statistics, and this structure allows us to accurately predict feedback effects that arise from global system changes using only information collected while remaining in equilibrium.
We discuss a general roadmap for generating causal inference based on observational studies used to generate real-world evidence. We review targeted minimum loss estimation (TMLE), which provides a general template for the construction of asymptotically efficient plug-in estimators of a target estimand for realistic (i.e., infinite-dimensional) statistical models. TMLE is a two-stage procedure that first involves using ensemble machine learning, termed super-learning, to estimate the relevant stochastic relations between the treatment, censoring, covariates, and outcome of interest. The super-learner allows one to fully utilize all the advances in machine learning (in addition to more conventional parametric-model-based estimators) to build a single most powerful ensemble machine learning algorithm. We present the Highly Adaptive Lasso as an important machine learning algorithm to include.
In the second step, TMLE involves maximizing a parametric likelihood along a so-called least favorable parametric model through the super-learner fit of the relevant stochastic relations in the observed data. This second step bridges the state of the art in machine learning to estimators of target estimands for which statistical inference is available (i.e., confidence intervals, p-values, etc.). We also review recent advances in collaborative TMLE, in which the fit of the treatment and censoring mechanism is tailored with respect to the performance of the TMLE. We also discuss asymptotically valid bootstrap-based inference. Simulations and data analyses are provided as demonstrations.
We describe different approaches for specifying models and prior distributions for estimating heterogeneous treatment effects using Bayesian nonparametric models. We make an affirmative case for direct, informative (or partially informative) prior distributions on heterogeneous treatment effects, especially when treatment effect size and treatment effect variation is small relative to other sources of variability. We also consider how to provide scientifically meaningful summaries of complicated, high-dimensional posterior distributions over heterogeneous treatment effects with appropriate measures of uncertainty.
Climate change mitigation has traditionally been analyzed as some version of a public goods game (PGG) in which a group is most successful if everybody contributes, but players are best off individually by not contributing anything (i.e., “free-riding”)—thereby creating a social dilemma. Analysis of climate change using the PGG and its variants has helped explain why global cooperation on GHG reductions is so difficult, as nations have an incentive to free-ride on the reductions of others. Rather than inspire collective action, it seems that the lack of progress in addressing the climate crisis is driving the search for a “quick fix” technological solution that circumvents the need for cooperation.
This seminar discussed ways in which to produce professional academic writing, from academic papers to research proposals or technical writing in general.
Machine learning (including deep and reinforcement learning) and blockchain are two of the most noticeable technologies in recent years. The first one is the foundation of artificial intelligence and big data, and the second one has significantly disrupted the financial industry. Both technologies are data-driven, and thus there are rapidly growing interests in integrating them for more secure and efficient data sharing and analysis. In this paper, we review the research on combining blockchain and machine learning technologies and demonstrate that they can collaborate efficiently and effectively. In the end, we point out some future directions and expect more research on deeper integration of the two promising technologies.
In this talk, we discuss QuTrack, a Blockchain-based approach to track experiment and model changes primarily for AI and ML models. In addition, we discuss how change analytics can be used for process improvement and to enhance the model development and deployment processes.
CLIM Undergraduate Workshop: Introduction to Spatial Data Analysis with R - Maggie Johnson, Oct 23, 2017
1. An Introduction to Spatial Data Analysis
Maggie Johnson
Statistical and Applied Mathematical Sciences Institute
North Carolina State University
mjohnson@samsi.info
CLIM Undergraduate Workshop
2. Dependent Data
The First Law of Geography
“Everything is related to everything else, but near things are more related than
distant things.” – Waldo Tobler
Figure: Time series data (AirPassengers, 1950–1960)
Figure: Spatial data (ozone concentration, June 18, 1987)
3. Spatial Data
The term spatial data is often used to refer to data that are connected to
physical geographical locations.
Notation:
D ⊂ R^d represents the spatial domain, usually d = 2
s ∈ D is a d-dimensional vector representing a “location” in space, e.g. s ≡ (longitude, latitude)
Three main types of spatial data
Point-referenced (geostatistical) data
Areal-referenced data
Point process data
4. Point-Referenced Data
Features:
Data are observations of a continuous spatial process
We only observe data at a subset of fixed locations
Goals:
Main goal is often prediction at unobserved locations
Examples:
Daily maximum temperature data collected at land surface monitoring
stations across the US
Ozone concentration measured at stations
5. Point-Referenced (Geostatistical) Data
Figure: GHCN station locations (left) and July average maximum temperature (right)
6. Focus for Today
Point-referenced data (geostatistics)
Prediction at unobserved locations
7. Data
Average Maximum July Temperature and Elevation
Figure: Average maximum July temperature (left) and elevation (right)
8. Correlation
Correlation: a numeric measure of the relationship between two variables, ranging between −1 and 1.
If two variables are correlated, knowing the value of one variable provides
information about what we expect the value of the other variable should be.
Figure: Scatterplots of two variables with Corr = 0.81 (left) and Corr = −0.56 (right)
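As a quick aside (not from the slides), the sample correlation can be computed in R with cor(); the data below are simulated purely for illustration.

```r
# Minimal sketch: sample correlation between two simulated variables.
set.seed(1)
x <- rnorm(100)
y <- 0.8 * x + rnorm(100, sd = 0.5)   # y is constructed to be correlated with x
cor(x, y)                             # a strong positive correlation
```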
9. Average Maximum July Temperature and Elevation
Figure: Average July maximum temperature vs. elevation, Corr = −0.82
10. Exploring Spatial Dependence
Correlogram
An exploratory visualization of the correlation between locations as a function of
distance.
1. Compute the pairwise distance between all locations: dist(s1, s2) = √((lat1 − lat2)² + (lon1 − lon2)²)
2. Bin distances into a set of groups, estimate the correlation within each bin
3. Plot estimated correlations against distance
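A minimal sketch of these three steps in R, assuming a data frame df with columns lon and lat plus a variable of interest; the function name, binning choice, and estimator are illustrative only, not the workshop's code.

```r
# Empirical correlogram sketch: bin pairwise distances, then average
# standardized products of the variable within each distance bin.
correlogram <- function(df, values, n_bins = 10) {
  d <- as.matrix(dist(df[, c("lon", "lat")]))  # step 1: pairwise distances
  z <- (values - mean(values)) / sd(values)    # standardize the variable
  prods <- outer(z, z)                         # pairwise products
  keep <- upper.tri(d)                         # count each pair once
  bins <- cut(d[keep], breaks = n_bins)        # step 2: bin the distances
  tapply(prods[keep], bins, mean)              # step 3: correlation per bin
}
# e.g. plot(correlogram(df, df$temp), type = "b",
#           xlab = "Distance bin", ylab = "Correlation")
```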
11. Exploring Spatial Dependence
Figure: Empirical correlograms (correlation vs. distance) for average July temperature (left) and for independent data (right)
12. Linear Regression Model
The classical simple linear regression model assumes
ObservedValue = β0 + Covariate · β1 + error
For example,
ObservedTemperature(s) = β0 + Elevation(s) · β1 + error(s)
is the linear model defining temperature at a location s as a linear function of elevation at that location.
Errors are assumed independent and normally distributed, N(0, σ²)
β0, β1, and σ² are unknown, so to use the model we need to estimate them (use R!)
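A minimal sketch (not from the slides) of estimating β0, β1, and σ² with lm(), assuming a data frame df with columns temp (average July maximum temperature) and elev (elevation):

```r
# Fit Temp = beta0 + Elev * beta1 + error by least squares.
fit <- lm(temp ~ elev, data = df)
summary(fit)   # estimates of beta0, beta1, and the residual standard error
coef(fit)      # slide 13 reports roughly 32.666 and -0.0065 for these data
```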
13. Average Maximum July Temperature and Elevation
Idea is to find the “best fit line”, y = mx + b, to the data
Using R, we get: Temp = 32.666 + Elev · (−0.0065)
Figure: Average July maximum temperature vs. elevation with the fitted regression line
14. Prediction using the simple linear regression model
Once we’ve estimated the model, as long as we have a value of elevation at a new location s0, we can predict temperature at that location.
PredictedTemp(s0) = 32.666 + Elevation(s0) · (−0.0065)
Figure: Map of predicted average July maximum temperatures
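Continuing the sketch above, predictions at new locations only need the elevation there; the elevation values below are made up for illustration.

```r
# Predict temperature at new locations from their (hypothetical) elevations.
new_locs <- data.frame(elev = c(150, 600, 1200))
predict(fit, newdata = new_locs)
```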
15. How reasonable are the predictions?
Look at the residuals, Observed Temp(s) - Predicted Temp(s)
Residuals indicate the “errors” made by our model
Remember the model assumes errors are random and independent of each
other
Figure: Map of the elevation-model residuals
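A small sketch (not from the slides) of extracting and mapping the residuals, again assuming df carries lon and lat for the stations:

```r
# Residuals = observed minus predicted; map them to look for spatial structure.
df$res <- resid(fit)
plot(df$lon, df$lat, col = cut(df$res, breaks = 5), pch = 19,
     xlab = "Longitude", ylab = "Latitude", main = "Elevation Residuals")
```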
16. Spatial correlation in the residuals?
Look at a correlogram of the residuals
Figure: Empirical correlogram of the residuals
17. Add Latitude and Longitude
PredTemp(s) = 49.037 + Lat(s) · (−0.48) + Long(s) · (−0.13) + Elev(s) · (−0.006)
Figure: Map of predicted temperatures from the longitude + latitude + elevation model
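A minimal sketch (not from the slides) of the expanded regression, assuming df also contains lat and lon columns:

```r
# Add latitude and longitude as covariates alongside elevation.
fit2 <- lm(temp ~ lat + lon + elev, data = df)
coef(fit2)   # slide 17 reports roughly 49.0, -0.48, -0.13, and -0.006
```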
18. Look at the residuals
Figure: Map of residuals from the longitude + latitude + elevation model
19. Spatial correlation in the residuals?
Look at a correlogram of the residuals
Figure: Empirical correlogram of the residuals
How do we incorporate the remaining dependence between locations into the
model?
20. Additive Geostatistical Modeling
An additive spatial regression model includes an additional component to model the remaining spatial dependence in the residuals.
Observation(s) = Regression Terms(s) + g(s) + error(s)
The g(s) term is a spatial process model which allows us to model the dependence between any two locations as a function of the distance between them.
g(s) is assumed to be a Gaussian process
Models the dependence between any two locations through a specified correlation (or covariance) function, which has additional parameters that need to be estimated (use R!)
Think of fitting a curve to the correlogram
Commonly used correlation functions are the exponential and Matérn
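As a small illustration (not from the slides), the exponential correlation function mentioned above can be written and plotted directly; the range parameter phi below is an arbitrary choice.

```r
# Exponential correlation function: correlation decays with distance,
# at a rate controlled by the range parameter phi.
exp_corr <- function(d, phi) exp(-d / phi)
curve(exp_corr(x, phi = 1), from = 0, to = 5,
      xlab = "Distance", ylab = "Correlation")
```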
21. Additive Geostatistical Modeling
Prediction
Prediction at a new location s0 is
Prediction = Regression Terms + Weighted Sum of Observations
Same idea as with the independent linear model, except now an additional
weighted average of the observed data at all locations is included in the
prediction.
Data observed at locations closest to the prediction location have highest
weights.
Under this model, predictions can be obtained even in the absence of
covariates!
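One possible way (not shown in the slides) to fit the additive geostatistical model and predict in R is the sp and gstat packages; the object names are hypothetical and argument details may vary across package versions.

```r
library(sp)
library(gstat)

sp_df <- df                         # station data with temp, lon, lat (assumed)
coordinates(sp_df) <- ~ lon + lat   # promote to a spatial object

# Empirical variogram of temperature after removing the lon/lat trend, then
# fit an exponential model to it ("fitting a curve to the correlogram").
v     <- variogram(temp ~ lon + lat, data = sp_df)
v_fit <- fit.variogram(v, vgm("Exp"))

# Universal kriging: regression terms plus a weighted sum of nearby observations.
# pred_grid is a spatial object of prediction locations (hypothetical).
pred <- krige(temp ~ lon + lat, sp_df, newdata = pred_grid, model = v_fit)
```

The fitted object would contain both predictions and kriging variances, analogous to the prediction and standard-error maps on the following slides.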
22. Geostatistical Model with Long, Lat as Covariates
Figure: Kriging predictions (left) and standard errors (right)
23. Geostatistical Model with Long, Lat as Covariates
24. Geostatistical Model with Long, Lat as Covariates
Figure: Empirical correlogram of the residuals (left) and map of residuals (right)
25. Geostatistical Model with Long, Lat, Elevation as
Covariates
Figure: Map of kriging predictions
26. Geostatistical Model with Long, Lat as Covariates
27. Geostatistical Model with Long, Lat, Elevation as
Covariates
Figure: Empirical correlogram of the residuals (left) and map of residuals (right)