This document works through three hypothetical examples of using Bayes' rule to calculate the probability that a condition is present given a positive test result. In the first two examples, on Edwards syndrome and Down syndrome, the tests have high sensitivity and specificity, yet the posterior probability that the condition is actually present is low: about 1.7% and 9%, respectively. In the third example, on blue eyes, the high base rate in the population yields a 95.2% chance of blue eyes given a positive test.
Bayes NIPT
Hypothetical Example: 1
• Non-Invasive Prenatal Testing (NIPT) is an increasingly popular technique that screens for chromosomal abnormalities during pregnancy.
• Huge benefits with few risks.
• The tests are highly accurate, with 99% of abnormalities identified.
• Your (or your partner’s) test comes back saying your baby has Edwards syndrome (trisomy 18)…
• What is the probability that your baby actually has Edwards syndrome?
• Suppose the sensitivity of the test is high: 99% of fetuses with trisomy 18 are detected.
• Suppose the specificity of the test is also high: 99% of fetuses with no trisomies test negative.
• Given your positive test and this information, does your fetus have a high or low probability of actually having a chromosomal abnormality?
• Bayes’ rule comes to the rescue!
• Let A be the event “Have trisomy 18”
• Let B be the event “Test Positive for trisomy 18”
• We want P(A | B) in terms we can easily quantify:

P(A | B) = P(B | A) × P(A) / P(B)

“Probability of having trisomy 18 given a positive test” = “Probability of a positive test given you have trisomy 18” × “Probability of having trisomy 18” ÷ “Probability of testing positive for trisomy 18”
• A = “Have trisomy 18”; B = “Test Positive for trisomy 18”
• In the USA, P(A) = P(have trisomy 18) = 1/6000 ≈ 1.7 × 10⁻⁴
• Sensitivity: P(B | A) = 0.99; Specificity: P(Bᶜ | Aᶜ) = 0.99
• Thus, there is only a ~1.7% chance your fetus has trisomy 18, despite the high sensitivity and specificity!
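This result can be checked directly from Bayes' rule. The short Python sketch below (the function name `ppv` is ours, not from the slides) computes the positive predictive value P(A | B) from the prior, sensitivity, and specificity:

```python
def ppv(prior, sensitivity, specificity):
    """Positive predictive value P(A | B) via Bayes' rule."""
    # P(B) = P(B | A) P(A) + P(B | not A) P(not A)
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Edwards syndrome example: prior 1/6000, 99% sensitivity and specificity
print(round(ppv(1 / 6000, 0.99, 0.99), 3))  # ≈ 0.016, i.e. roughly 1.6–1.7%
```

Because the prior is so small, the false positives among the ~5999/6000 unaffected fetuses swamp the true positives, which is exactly why the posterior stays low.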
Hypothetical Example: 2
• Down syndrome: trisomy 21.
• A = “Have trisomy 21”; B = “Test Positive for trisomy 21”
• In the USA, P(A) = P(have trisomy 21) = 1/1000 = 10⁻³
• Sensitivity: P(B | A) = 0.99; Specificity: P(Bᶜ | Aᶜ) = 0.99
• Thus, there is only a ~9% chance your fetus has trisomy 21, despite the high sensitivity and specificity!
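The same Bayes' rule arithmetic, with only the prior changed to 1/1000 and the same hypothetical 99%/99% test, reproduces the ~9% figure:

```python
prior, sensitivity, specificity = 1 / 1000, 0.99, 0.99
# P(B) = P(B | A) P(A) + P(B | not A) P(not A)
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
posterior = sensitivity * prior / p_positive  # P(A | B)
print(round(posterior, 3))  # 0.09
```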
Hypothetical Example: 3
• A = “Have blue eyes”; B = “Test Positive for OCA2 variant”
• In the USA, P(A) = P(have blue eyes) = 1/6 ≈ 0.167
• Sensitivity: P(B | A) = 0.99; Specificity: P(Bᶜ | Aᶜ) = 0.99
• Thus, there is a ~95.2% chance your baby will have blue eyes, given the positive test!
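With the much higher base rate of 1/6, the identical calculation now gives the ~95% posterior quoted above, illustrating how strongly the prior drives the result:

```python
prior, sensitivity, specificity = 1 / 6, 0.99, 0.99
# P(B) = P(B | A) P(A) + P(B | not A) P(not A)
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
posterior = sensitivity * prior / p_positive  # P(A | B)
print(round(posterior, 3))  # 0.952
```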