What is probabilistic programming? By analogy: if functional programming is programming with first-class functions and equational reasoning, probabilistic programming is programming with first-class distributions and Bayesian inference. All computable probability distributions can be encoded as probabilistic programs, and every probabilistic program represents a probability distribution.
What does it do? It gives a concise language for specifying complex, structured statistical models, and abstracts over the implementation details of exact and approximate inference algorithms. These models can be networked, causal, hierarchical, recursive, anything: the graph structure of the program is the generative structure of the distribution.
Who's interested? Cognitive scientists, statisticians, machine-learning specialists, and artificial-intelligence researchers.
Probabilistic programming is a new approach to machine learning and data science that is currently the focus of intense academic research, including an ongoing DARPA program. If successful, probabilistic programming systems will allow sophisticated predictive models to be written by a wide range of domain experts. Before we get to the promised land, though, some basic challenges need to be addressed, including performance on real-world datasets, programming tools support, and education.
3. Bayesian Role-Playing Games
● With enough paper, we could just write out every possible play based on every possible dice-roll and the fixed player levels.
● How many ways could we have gotten from the start of the game to the board we see? That's a likelihood.
● Which dice rolls were more common in games that gave us the real play? That's a posterior probability.
● Bayesianism: unknown levels are just more dice.
– And are the actual dice weighted?
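The dice intuition above can be sketched as exact Bayesian inference over two hypotheses. A minimal Python sketch (the deck's own examples use Church/Scheme); the specific dice weights and observed rolls are made-up illustrations:

```python
# Two hypothetical dice: a fair one, and one weighted toward six.
fair = {face: 1 / 6 for face in range(1, 7)}
weighted = {face: (3 / 8 if face == 6 else 1 / 8) for face in range(1, 7)}

def likelihood(die, observed):
    # 'How many ways could we have gotten the board we see?' Here:
    # the probability of the observed roll sequence under each die.
    prob = 1.0
    for face in observed:
        prob *= die[face]
    return prob

observed = [6, 6, 3, 6]           # illustrative data
prior = {'fair': 0.5, 'weighted': 0.5}
unnorm = {h: likelihood(d, observed) * prior[h]
          for h, d in [('fair', fair), ('weighted', weighted)]}
total = sum(unnorm.values())
posterior = {h: u / total for h, u in unnorm.items()}
# Three sixes in four rolls shift belief strongly toward the weighted die.
```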
4. Bayesian Reasoning is Hard!
● P(H | E) = P(E | H) P(H) / ∫ P(E | H′) P(H′) dH′
● Bayesian reasoning: how to update beliefs in response to evidence.
● But the integral in the denominator often cannot be solved analytically.
● Approaches: conjugate priors and posteriors (for which we don't need to evaluate it), and various numerical methods.
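One standard numerical workaround can be sketched in a few lines of Python: discretize the hypothesis space so the troublesome integral becomes a finite sum. The data (4 heads in 10 flips) and the grid resolution are illustrative assumptions, not anything from the talk:

```python
# Hypotheses: the unknown bias p of a coin, discretized to a grid
# so the integral in Bayes' rule becomes a finite sum.
grid = [i / 100 for i in range(1, 100)]
prior = [1 / len(grid)] * len(grid)   # uniform prior over biases

heads, flips = 4, 10                  # hypothetical observed data

def likelihood(p):
    # Binomial likelihood up to a constant (it cancels on normalizing).
    return p ** heads * (1 - p) ** (flips - heads)

unnorm = [likelihood(p) * pr for p, pr in zip(grid, prior)]
evidence = sum(unnorm)                # the denominator, now just a sum
posterior = [u / evidence for u in unnorm]

# Updated belief about the bias after seeing the data.
posterior_mean = sum(p * w for p, w in zip(grid, posterior))
```

With a uniform prior this approximates the Beta(5, 7) posterior, whose mean is 5/12.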
5. The Trouble with Bayesian Modeling (1)
Calculating probabilities takes time exponential in the number of relevant variables!
7. The Trouble with Bayesian Modeling (3)
● Exact Bayesian inference methods only support possible worlds with a finite number of things in them.
● Every combination of model and inference method currently has to be combined manually into non-reusable code.
● Probabilistic modeling as we think we know it is inexpressive and slow, ranging in complexity from NP-complete to DEXPTIME.
8. Programs have generative structure
● Generative model = Rules + Randomness + Random Parameters
– A program for generating random outcomes, given unknown parameters.
● Probabilistic Program = Probability Distribution
● The Stochastic Lambda Calculus: a Turing-complete language with random choices.
– In jargon: the 'probability monad'.
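The probability monad can be sketched in a few lines of Python by treating a distribution as a finite map from outcomes to probabilities; `unit` and `bind` below are the standard monad operations written out by hand, not part of any particular library:

```python
# A distribution is a dict mapping outcomes to probabilities summing to 1.

def unit(x):
    """A point-mass distribution: the monadic 'return'."""
    return {x: 1.0}

def bind(dist, f):
    """Monadic bind: apply f to each outcome, mixing the resulting
    distributions weighted by the probability of that outcome."""
    out = {}
    for x, px in dist.items():
        for y, py in f(x).items():
            out[y] = out.get(y, 0.0) + px * py
    return out

def flip(p):
    return {True: p, False: 1.0 - p}

# Two flips of a fair coin; count the heads by exact enumeration.
two_flips = bind(flip(0.5), lambda a: bind(flip(0.5), lambda b: unit(a + b)))
# two_flips == {2: 0.25, 1: 0.5, 0: 0.25}
```

Exact enumeration like this only works for finite support; sampling-based semantics take over beyond that.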
9. What sorts of queries?
● Sampling: 'What are some possibilities?'
● Expectation: 'What should we expect to see?'
● Support: 'What could happen?'
● Probability mass/density: 'What's the chance this happens?'
● Conditional query: 'Given this, what about that? What about when X happens more than Y?'
– 'Given that I saw 18 black ravens, what proportion of all ravens are black?'
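The raven question is a conditional query, which the simplest inference scheme, rejection sampling, answers by discarding sampled worlds that contradict the evidence. A rough Python sketch, assuming a uniform prior over the proportion of black ravens (the prior and sample sizes are illustrative choices):

```python
import random

random.seed(0)

def sample_proportion():
    return random.random()  # uniform prior over the proportion of black ravens

def observe_ravens(p, n=18):
    # Simulate seeing n ravens; True if every one came out black.
    return all(random.random() < p for _ in range(n))

# Conditional query by rejection: keep only the worlds in which the
# evidence (18 ravens, all black) actually happened.
accepted = []
while len(accepted) < 2000:
    p = sample_proportion()
    if observe_ravens(p):
        accepted.append(p)

posterior_mean = sum(accepted) / len(accepted)
# The posterior concentrates near 1: seeing only black ravens pushes
# the inferred proportion of black ravens upward (true mean is 19/20).
```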
10. What sorts of inference?
● Mostly sampling-based approximate inference, but there are some exact algorithms.
● By sampling, we can reason about models where each possible world might contain a different number of actual entities.
● Amortized over many samples from the distribution, approximate inference adds only a small cost of its own.
– And advanced inference algorithms can re-use parts of computations to sample even faster.
11. What can probabilistic programs express?
● In a 'code as data' language like Church (based on Scheme), programs can include eval and learn arbitrary code.
● Or they can do a query about a query: inference about inference.
● Generative structure generalizes logical and physical structure: 'The λ-calculus does not focus on the sequence of time, but rather o…'
12. The Elegance of Probabilistic Programming
● Programs = Distributions.
– Running a program = sampling an outcome from the distribution
– Monadic semantics: every function maps input values to output distributions.
● Queries are just functions; the language runtime performs inference.
● '[H]allucinate possible worlds', and reason about them.
14. How can we use probabilistic programming?
● 'We would like to build computing machines that can interpret and learn from their sensory experience, act effectively in real time, and - ultimately - design and program themselves.'
– Professor Vikash Mansinghka, 'Natively Probabilistic Computation'
● Applications in: computer vision, cognitive science, machine learning, natural language processing, and artificial intelligence.
15. How can I use probabilistic programming?
● Write a program simulating whatever you want to reason about. Where you don't know which specific choice to make, choose randomly.
● Present the language runtime's inference engine with real-world data compatible with your model.
– And go get some coffee.
● The inference engine will learn which random choices generate data like yours.
17. Computer Vision in 20 Lines of Code (2)
● The prior model 'knows' how to render pictures, but chooses what to render completely at random. Inference then 'learns' which choices most likely drew the real image.
● 'Our probabilistic graphics program did not originally support rotation, which was needed for the AOL CAPTCHAs; adding it required only 1 additional line of probabilistic code.'
– http://probcomp.csail.mit.edu/gpgp/
● Homework takes more than 20 lines of code!
18. Which bar should we meet at?
Social reasoning and cooperation in 11 LOC.
19. Modeling where to meet for drinks
(define (sample-location)
  (if (flip .55) 'popular-bar 'unpopular-bar))

(define (alice depth)
  (query
    (define alice-location (sample-location))
    alice-location
    (equal? alice-location (bob (- depth 1)))))

(define (bob depth)
  (query
    (define bob-location (sample-location))
    bob-location
    (or (= depth 0) (equal? bob-location (alice depth)))))
The correct answer is actually Libirah!
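The Church program above can be approximated in Python by implementing each nested `query` as a rejection-sampling loop; the function names mirror the Scheme code, and the recursion depth of 3 is an arbitrary illustrative choice:

```python
import random

random.seed(2)

def sample_location():
    return 'popular-bar' if random.random() < 0.55 else 'unpopular-bar'

def alice(depth):
    # query: alice's location, conditioned on matching bob's choice.
    while True:
        loc = sample_location()
        if loc == bob(depth - 1):
            return loc

def bob(depth):
    # query: bob's location; at depth 0 he stops reasoning about alice.
    while True:
        loc = sample_location()
        if depth == 0 or loc == alice(depth):
            return loc

# Nested conditioning amplifies the slight prior preference: the deeper
# each player reasons about the other, the more both pick the likelier bar.
draws = [alice(3) for _ in range(500)]
popular_share = draws.count('popular-bar') / len(draws)
```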
20. Generating Scenes Under Constraints
● Sample fresh coffee shops, conditioned on the constraints of good design.
● Scene generation is an open-world task: not just where to put furniture but how much to generate is random.
● Exact inference can't even handle these models!
21. What more needs doing?
● Sampling-based approximation helps, but we need better inference algorithms. No real-time reasoning yet.
● Techniques from compilers research help.
– One recent paper got a 600x speed-up by caching the nonrandom parts of each computation and applying Just-In-Time compilation.
● Halting Problem, Rice's Theorem: we can't prove one inference algorithm supreme in all circumstances.
● Reusable data: export what we've learned and pass it on to others.
● Ways to treat intractability probabilistically
22. Conclusions
● Rules = Programs, Uncertain knowledge = Random choice.
● Rules + Uncertainty = Programs + Randomness = Probabilistic Programming
● Probabilistic programs can express any complex model, but we need to improve our inference algorithms to make reasoning tractable.
● Fields of application: cognitive science, procedural content generation, machine learning, computer vision, Bayesian statistics, artificial intelligence.
23. Bibliography (1)
● Probabilistic Models of Cognition
● http://probabilistic-programming.org/research/, http://forestdb.org/
● Vikash Mansinghka. Natively Probabilistic Computation. PhD thesis, Massachusetts Institute of Technology, 2009.
● Noah D. Goodman, Vikash K. Mansinghka, Daniel M. Roy, Keith Bonawitz, and Joshua B. Tenenbaum. Church: a language for generative models. In Proc. of Uncertainty in Artificial Intelligence, 2008.
● Fritz Obermeyer. Automated equational reasoning in nondeterministic lambda-calculi modulo theories H*. PhD thesis, Carnegie-Mellon University, 2009.
24. Bibliography (2)
● Vikash Mansinghka, Tejas Kulkarni, Yura Perov, and Josh Tenenbaum. Approximate Bayesian Image Interpretation using Generative Probabilistic Graphics Programs. In Neural Information Processing Systems, 2013.
● Andrew D. Gordon, Thomas A. Henzinger, Aditya V. Nori, and Sriram K. Rajamani. Probabilistic programming. In International Conference on Software Engineering (ICSE, FOSE track), 2014.
● Andreas Stuhlmüller and Noah D. Goodman. Reasoning about Reasoning by Nested Conditioning: Modeling Theory of Mind with Probabilistic Programs. In Cognitive Systems Research, 2013.
● Yi-Ting Yeh, Lingfeng Yang, Matthew Watson, Noah D. Goodman, and Pat Hanrahan. Synthesizing open worlds with constraints using locally annealed reversible jump MCMC. ACM Trans. Graph. 31(4), Article 56, July 2012.