The document discusses topics in active learning and optimal experimental design. It begins with motivations for active learning, such as higher-quality data being more useful than simply more data, and data collection often being expensive. It gives examples of applying active learning to problems such as classification with gene expression data, collaborative filtering for movie recommendations, sequencing genomes, and improving cell culture conditions. It then covers uncertainty sampling, query by committee, and information-based loss functions for active learning. For optimal experimental design, it discusses A-optimal, D-optimal, and E-optimal design and how they apply to linear models. It also covers extensions to non-linear models using techniques such as sequential, Bayesian, and maximin experimental design.
2. Motivation
• Better data is often more useful than simply more data (quality over quantity)
• Data collection may be expensive
– Cost of time and materials for an experiment
– Cheap vs. expensive data
• Raw images vs. annotated images
• Want to collect the best data at minimal cost
3. Toy Example: 1D classifier
Classifier (threshold function): h_w(x) = 1 if x > w (0 otherwise)
Unlabeled data: labels are all 0 then all 1 (left to right)
0 0 0 0 0 1 1 1 1 1
Goal: find the transition between 0 and 1 labels in the minimum number of steps
Naïve method: choose points to label at random on the line
• Requires O(n) training data to find the underlying classifier
Better method: binary search for the transition between 0 and 1
• Requires O(log n) training data to find the underlying classifier
• Exponential reduction in training data size!
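A minimal sketch of this binary-search active learner; the labeling oracle `query_label` and its threshold value 0.37 are made up for illustration:

```python
import numpy as np

def query_label(x, true_w=0.37):
    """Hypothetical labeling oracle: h_w(x) = 1 if x > w, else 0."""
    return int(x > true_w)

def active_binary_search(points, oracle):
    """Locate the 0->1 transition with O(log n) label queries.
    Assumes points are sorted and the first/last labels are 0/1."""
    lo, hi = 0, len(points) - 1
    n_queries = 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        n_queries += 1
        if oracle(points[mid]) == 0:
            lo = mid      # transition lies to the right of mid
        else:
            hi = mid      # transition lies at or to the left of mid
    return points[lo], points[hi], n_queries

points = np.sort(np.random.rand(1000))
left, right, n = active_binary_search(points, query_label)
print(f"threshold lies in ({left:.3f}, {right:.3f}) after {n} queries")  # ~10 queries, not ~1000
```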
4. Example: collaborative filtering
• Users usually rate only a few movies; ratings are “expensive”
• Which movies do you show users to best extrapolate movie preferences?
• Also known as questionnaire design
• Baseline questionnaires:
– Random: m movies chosen at random
– Most Popular Movies: the m most frequently rated movies
• The most-popular-movies questionnaire is no better than a random design!
• Popular movies are rated highly by all users; they do not discriminate tastes
[Yu et al., 2006]
5. Example: Sequencing genomes
• What genome should be sequenced next?
• Criteria for selection?
• Optimal species to detect phenomena of interest
[McAuliffe et al., 2004]
6. Example: Improving cell culture conditions
• Grow a cell culture in a bioreactor
– Concentrations of various compounds
• Glucose, lactate, ammonia, asparagine, etc.
– Temperature, etc.
• Task: find the optimal growing conditions for a cell culture
• Optimal: perform as few time-consuming experiments as possible to find the optimal conditions
7. Topics for today
• Introduction: Information theory
• Active learning
– Query by committee
– Uncertainty sampling
– Information-based loss functions
• Optimal experimental design
– A-optimal design
– D-optimal design
– E-optimal design
• Non-linear optimal experimental design
– Sequential experimental design
– Bayesian experimental design
– Maximin experimental design
• Summary
8. Topics for today
• Introduction: Information theory
• Active learning
– Query by committee
– Uncertainty sampling
– Information-based loss functions
• Optimal experimental design
– A-optimal design
– D-optimal design
– E-optimal design
• Non-linear optimal experimental design
– Sequential experimental design
– Bayesian experimental design
– Maximin experimental design
• Summary
9. Entropy Function
• A measure of information in a random event X with possible outcomes {x1, …, xn}:
$H(X) = -\sum_i p(x_i) \log_2 p(x_i)$
• Comments on the entropy function:
– Entropy of an event is zero when the outcome is known
– Entropy is maximal when all outcomes are equally likely
• The average minimum number of yes/no questions needed to determine the outcome
– Related to binary search
[Shannon, 1948]
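A small sketch of this formula in code, treating 0 · log 0 as 0 (the usual convention):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(X) = -sum_i p(x_i) log2 p(x_i), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # convention: 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)))

print(entropy([1.0, 0.0]))   # 0.0 -> outcome known, zero entropy
print(entropy([0.5, 0.5]))   # 1.0 -> equally likely outcomes, maximal
```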
10. Kullback-Leibler divergence
• P = true distribution
• Q = alternative distribution that is used to encode the data
• The KL divergence is the expected extra message length per datum that must be transmitted using Q
• Measures how different the two distributions are:
$D_{KL}(P \,\|\, Q) = \sum_i P(x_i) \log \frac{P(x_i)}{Q(x_i)} = \sum_i P(x_i) \log P(x_i) - \sum_i P(x_i) \log Q(x_i) = H(P, Q) - H(P)$
i.e. cross-entropy minus entropy
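A direct sketch of the formula; the example distributions are made up:

```python
import numpy as np

def kl_divergence(p, q):
    """D_KL(P || Q) in bits; assumes q > 0 wherever p > 0."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

p, q = [0.5, 0.5], [0.9, 0.1]
print(kl_divergence(p, q))   # > 0
print(kl_divergence(q, p))   # differs: KL divergence is not symmetric
print(kl_divergence(p, p))   # 0.0: identical distributions
```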
11. KL divergence properties
• Non-negative: D(P||Q) ≥ 0
• Divergence is 0 if and only if P and Q are equal:
– D(P||Q) = 0 iff P = Q
• Non-symmetric: D(P||Q) ≠ D(Q||P) in general
• Does not satisfy the triangle inequality: D(P||Q) ≤ D(P||R) + D(R||Q) can fail
12. KL divergence properties
• Non-negative: D(P||Q) ≥ 0
• Divergence is 0 if and only if P and Q are equal:
– D(P||Q) = 0 iff P = Q
• Non-symmetric: D(P||Q) ≠ D(Q||P) in general
• Does not satisfy the triangle inequality: D(P||Q) ≤ D(P||R) + D(R||Q) can fail
⇒ Not a distance metric
13. KL divergence as gain
• The KL divergence between posteriors measures the amount of information gain expected from a query (where x′ is the queried data):
$D\big( p(\theta \mid x, x') \,\|\, p(\theta \mid x) \big)$
• Goal: choose a query that maximizes the KL divergence between posterior and prior
• Basic idea: the largest KL divergence between the updated posterior and the current posterior represents the largest gain
14. Topics for today
• Introduction: information theory
• Active learning
– Query by committee
– Uncertainty sampling
– Information-based loss functions
• Optimal experimental design
– A-optimal design
– D-optimal design
– E-optimal design
• Non-linear optimal experimental design
– Sequential experimental design
– Bayesian experimental design
– Maximin experimental design
• Summary
15. Active learning
• Setup: given existing knowledge, choose where to collect more data
– Access to cheap unlabelled points
– Make a query to obtain an expensive label
– Want to find labels that are “informative”
• Output: a classifier / predictor trained on less labeled data
• Similar to “active learning” in classrooms
– Students ask questions, receive a response, and ask further questions
– vs. passive learning: the student just listens to the lecturer
• This lecture covers:
– how to measure the value of data
– algorithms to choose the data
16. Example: Gene expression and cancer classification
• Active learning takes 31 points to achieve the same accuracy as passive learning with 174
[Liu, 2004]
17. Reminder: Risk Function
• Given an estimation procedure / decision function d
• The frequentist risk, given the true parameter θ, is the expected loss after seeing new data
• The Bayesian integrated risk, given a prior, is defined as the posterior expected loss
• The loss includes the cost of the query, prediction error, etc.
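As a sketch, the standard decision-theoretic definitions behind these bullets (with loss L, data X, decision rule d, and prior π assumed):

```latex
% Frequentist risk: expected loss under data drawn from p(x | \theta)
R(\theta, d) = \mathbb{E}_{X \sim p(x \mid \theta)}\!\left[ L\big(\theta, d(X)\big) \right]
% Bayesian integrated risk: frequentist risk averaged over the prior \pi(\theta)
r(\pi, d) = \int_{\Theta} R(\theta, d)\, \pi(\theta)\, d\theta
```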
18. Decision theoretic setup
• Active learner
– The decision d includes which data point to query
• It also includes the prediction / estimate / etc.
– The learner receives a response from an oracle
• The response updates the parameters θ of the model
• The next decision as to which point to query is made based on the new parameters
• The query selected should minimize risk
19. Active Learning
• Some computational considerations:
– There may be many queries to calculate the risk for
• Subsample points
• The probability that the chosen point is far from the true minimum decreases exponentially with the subsample size
– It may not be easy to calculate the risk R
• Two heuristic methods for reducing risk:
– Select the “most uncertain” data point given the model and parameters
– Select the “most informative” data point to optimize expected gain
20. Uncertainty Sampling
• Query the event that the current classifier is most uncertain about
• Needs a measure of uncertainty and a probabilistic model for prediction
• Examples:
– Entropy
– Least confident predicted label
– Euclidean distance (e.g. the point closest to the margin in an SVM)
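A minimal sketch of pool-based uncertainty sampling, assuming the model exposes predicted class probabilities; the `probs` array below is made-up illustration data:

```python
import numpy as np

def uncertainty_scores(probs, method="entropy"):
    """Score unlabeled points by model uncertainty.
    probs: (n_points, n_classes) predicted class probabilities."""
    probs = np.asarray(probs, dtype=float)
    if method == "entropy":
        p = np.clip(probs, 1e-12, 1.0)
        return -np.sum(p * np.log2(p), axis=1)   # predictive entropy per point
    if method == "least_confident":
        return 1.0 - probs.max(axis=1)           # 1 - probability of top label
    raise ValueError(method)

def select_query(probs, method="entropy"):
    """Index of the most uncertain point: the next label to request."""
    return int(np.argmax(uncertainty_scores(probs, method)))

probs = np.array([[0.9, 0.1], [0.55, 0.45], [0.7, 0.3]])
print(select_query(probs))   # 1: closest to 50/50, most uncertain
```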
21. Example: Gene expression and cancer classification
• Data: cancerous lung tissue samples
– “Cheap” unlabelled data:
• gene expression profiles from Affymetrix microarrays
– Labeled data:
• 0-1 label for adenocarcinoma or malignant pleural mesothelioma
• Method:
– Linear SVM
– Measure of uncertainty:
• distance to the SVM hyperplane
[Liu, 2004]
22. Example: Gene expression and cancer classification
• Active learning takes 31 points to achieve the same accuracy as passive learning with 174
[Liu, 2004]
25. Query by Committee
• A point on the max-margin hyperplane does not reduce the number of valid hypotheses by much
26. Query by Committee
• Queries an example based on the degree of disagreement between a committee of classifiers
27. Query by Committee
• Prior distribution over classifiers/hypotheses
• Sample a set of classifiers from the distribution
• Natural for ensemble methods, which are already samples
– Random forests, bagged classifiers, etc.
• Measures of disagreement:
– Entropy of the predicted responses
– KL divergence of the predictive distributions
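A sketch of query by committee using bagged trees as the committee and vote entropy as the disagreement measure; the data are synthetic and `vote_entropy` is a hypothetical helper, not taken from any cited paper:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

def vote_entropy(votes, n_classes):
    """Disagreement per pool point: entropy of the committee's vote fractions.
    votes: (n_members, n_points) array of hard label predictions."""
    scores = np.zeros(votes.shape[1])
    for c in range(n_classes):
        frac = (votes == c).mean(axis=0)          # fraction voting for class c
        nz = frac > 0
        scores[nz] -= frac[nz] * np.log2(frac[nz])
    return scores

# Synthetic labeled set and unlabeled pool
rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 5)); y_lab = (X_lab[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(200, 5))

# Bagged trees stand in for a sample of classifiers from the hypothesis distribution
committee = BaggingClassifier(DecisionTreeClassifier(), n_estimators=25).fit(X_lab, y_lab)
votes = np.array([m.predict(X_pool) for m in committee.estimators_])
query_idx = int(np.argmax(vote_entropy(votes, n_classes=2)))
print(f"query pool point {query_idx}")
```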
28. Query by Committee Application
• Used a naïve Bayes model for text classification in a Bayesian learning setting (20 Newsgroups dataset)
[McCallum & Nigam, 1998]
29. Information-based Loss Function
• Previous methods looked at uncertainty at a single point
– They do not consider whether you can actually reduce uncertainty, or whether adding the point makes a difference to the model
• Want to model notions of information gained:
– Maximize the KL divergence between posterior and prior
– Maximize the reduction in model entropy between posterior and prior (reduce the number of bits required to describe the distribution)
• All of these can be extended to optimal design algorithms
• Must decide how to handle uncertainty about the query response and the model parameters
[MacKay, 1992]
30. Other active learning strategies
• Expected model change
– Choose the data point that imparts the greatest change to the model
• Variance reduction / Fisher information maximization
– Choose the data point that minimizes the error in parameter estimation
– More on this under design of experiments
• Density-weighted methods
– The previous strategies use only the query point and a distribution over models
– Also take the data distribution into account in a surrogate for the risk
31. Active learning warning
• The choice of data is only as good as the model itself
• Assume a linear model, and two data points are sufficient
• What happens when the data are not linear?
33. Topics for today
• Introduction: information theory
• Active learning
– Query by committee
– Uncertainty sampling
– Information-based loss functions
• Optimal experimental design
– A-optimal design
– D-optimal design
– E-optimal design
• Non-linear optimal experimental design
– Sequential experimental design
– Bayesian experimental design
– Maximin experimental design
• Summary
34. Experimental Design
• There are many considerations in designing an experiment:
– Dealing with confounders
– Feasibility
– Choice of variables to measure
– Size of the experiment (# of data points)
– How the experiment is conducted
– Choice of interventions/queries to make
– Etc.
35. Experimental Design
• There are many considerations in designing an experiment:
– Dealing with confounders
– Feasibility
– Choice of variables to measure
– Size of the experiment (# of data points)
– How the experiment is conducted
– Choice of interventions/queries to make
– Etc.
• We will only look at one of them
36. What is optimal experimental design?
• The previous slides give
– a general formal definition of the problem to be solved (which may be intractable or not worth the effort)
– heuristics to choose data
• Empirically good performance, but
– not much theory on how good the heuristics are
• Optimal experimental design gives
– theoretical credence to choosing a set of points
– for a specific set of assumptions and objectives
• Theory is good when you only get to run (a series of) experiments once
37. Optimal Experimental Design
• Given a model M with parameters θ:
– Which queries are maximally informative, i.e. will yield the best estimate of θ?
• “Best” minimizes the variance of the estimate
– Equivalently, maximizes the Fisher information
• Linear models
– The optimal design does not depend on θ!
• Non-linear models
– Depends on θ, but one can Taylor expand to a linear model
38. Optimal Experimental Design
• Assumptions
– Linear model: $y = \theta^\top x + \varepsilon$
– A finite set of queries {F1, …, Fs} that x_j can take
• Each Fi is a set of interventions/measurements (e.g. F1 = 10 ml of dopamine on a mouse with mutant gene G)
• mi = # of responses for query Fi
– The usual assumptions for linear least-squares regression
• Covariance of the MLE: $\mathrm{Cov}(\hat\theta) \propto (F^\top M F)^{-1}$, with $M = \mathrm{diag}(m_1, \dots, m_s)$
39. Relaxed Experimental Design
• Choosing the integer counts mi is a hard combinatorial problem; the error covariance is $(F^\top M F)^{-1}$
• The relaxed problem allows weights $w_i \ge 0$, $\sum_i w_i = 1$
• The error covariance matrix becomes $(F^\top W F)^{-1}$
• $(F^\top W F)^{-1}$ is the inverted Hessian of the squared error, or the inverted Fisher information matrix
• Minimizing $(F^\top W F)^{-1}$ reduces model error, or equivalently maximizes information gain
[Figure: Boolean problem vs. relaxed problem, N = 3]
40. Experimental Design: Types
• Want to minimize $(F^\top W F)^{-1}$; need a scalar objective
– A-optimal (average) design minimizes $\operatorname{tr}(F^\top W F)^{-1}$
– D-optimal (determinant) design minimizes $\log\det (F^\top W F)^{-1}$
– E-optimal (extreme) design minimizes the maximum eigenvalue of $(F^\top W F)^{-1}$
– An alphabet soup of other criteria (C-, G-, L-, V-, etc.)
• All of these design methods can use convex optimization techniques
• Computational complexity is polynomial for semidefinite programs (A- and E-optimal designs)
[Boyd & Vandenberghe, 2004]
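As a concrete illustration of the convex-optimization claim, a minimal sketch of the relaxed D-optimal problem using the cvxpy library; the candidate matrix F is random made-up data:

```python
import cvxpy as cp
import numpy as np

# Candidate query vectors F_i are the rows of F (s candidates, d parameters)
rng = np.random.default_rng(0)
s, d = 20, 3
F = rng.normal(size=(s, d))

w = cp.Variable(s, nonneg=True)                # relaxed design weights
# Information matrix F^T W F = sum_i w_i F_i F_i^T, affine in w
info = sum(w[i] * np.outer(F[i], F[i]) for i in range(s))
# D-optimal: minimizing log det (F^T W F)^{-1} = maximizing log det (F^T W F)
problem = cp.Problem(cp.Maximize(cp.log_det(info)), [cp.sum(w) == 1])
problem.solve()
print(np.round(w.value, 3))   # weight concentrates on a few informative queries
```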
41. A-Optimal Design
• A-optimal design minimizes the trace of $(F^\top W F)^{-1}$
– Minimizing the trace (sum of diagonal elements) essentially chooses maximally independent columns (small correlations between interventions)
• Tends to choose points on the border of the dataset
[Figure: example on a mixture of four Gaussians]
[Yu et al., 2006]
42. A-Optimal Design
• A-optimal design minimizes the trace of $(F^\top W F)^{-1}$
• Can be cast as a semidefinite program
[Figure: 20 candidate data points; minimal ellipsoid that contains all points]
[Boyd & Vandenberghe, 2004]
43. D-Optimal design
• D-optimal design minimizes the log determinant of $(F^\top W F)^{-1}$
• Equivalent to
– choosing the confidence ellipsoid with minimum volume (the “most powerful” hypothesis test in some sense)
– minimizing the entropy of the estimated parameters
• The most commonly used optimal design
[Boyd & Vandenberghe, 2004]
44. E-Optimal design
• E-optimal design minimizes the largest eigenvalue of $(F^\top W F)^{-1}$
• A minimax procedure
• Can be cast as a semidefinite program
• Minimizes the diameter of the confidence ellipsoid
[Boyd & Vandenberghe, 2004]
46. Optimal Design
• Extract an integer solution from the relaxed problem
• Can simply round the weights to the closest multiple of 1/m:
– $m_i = \mathrm{round}(m \cdot w_i)$, $i = 1, \dots, p$
[Boyd & Vandenberghe, 2004]
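A small sketch of this rounding step; the drift-correction loop is an added safeguard, not from the slide:

```python
import numpy as np

def round_design(w, m):
    """Integer experiment counts m_i = round(m * w_i) from relaxed weights w."""
    counts = np.rint(m * np.asarray(w, dtype=float)).astype(int)
    # Adjust so the counts sum exactly to m (rounding can drift by a few)
    drift = m - counts.sum()
    order = np.argsort(-np.asarray(w))       # give/take from the largest weights first
    for k in range(abs(drift)):
        counts[order[k % len(counts)]] += np.sign(drift)
    return counts

print(round_design([0.62, 0.25, 0.13], m=10))   # e.g. [7 2 1]
```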
47. Extensions to optimal design
• Cost associated with each experiment
– Add a cost vector and constrain the total cost by a budget B (one additional constraint)
• Multiple samples from a single experiment
– Each xi is now a matrix instead of a vector
– The optimization (covariance matrix) is identical to before
• Time profile of the process
– Add a time dimension to each experiment vector xi
[Boyd & Vandenberghe, 2004] [Atkinson, 1996]
48. Topics for today
• Introduction: information theory
• Active learning
– Query by committee
– Uncertainty sampling
– Information-based loss functions
• Optimal experimental design
– A-optimal design
– D-optimal design
– E-optimal design
• Non-linear optimal experimental design
– Sequential experimental design
– Bayesian experimental design
– Maximin experimental design
• Summary
49. Optimal design in non-linear models
• Given a non-linear model y = g(x, θ)
• The model is described by a Taylor expansion around a parameter estimate θ⁰, with sensitivities
$a_j(x, \theta^0) = \partial g(x, \theta) / \partial \theta_j$, evaluated at $\theta = \theta^0$
• Maximization of the Fisher information matrix is now the same as in the linear model
• Yields a locally optimal design, optimal for that particular value of θ
• Yields no information on the (lack of) fit of the model
[Atkinson, 1996]
50. Optimal design in non-linear models
• Problem: the parameter value θ, used to choose the experiments F, is unknown
• Three general techniques address this problem, useful for many possible notions of “gain”:
• Sequential experimental design: iterate between choosing an experiment x and updating the parameter estimates θ
• Bayesian experimental design: put a prior distribution on the parameter θ, then choose the best data x
• Maximin experimental design: assume the worst-case scenario for the parameter θ, then choose the best data x
51. Sequential Experimental Design
• The model parameter values are not known exactly
• Multiple experiments are possible
• The learner assumes that only one experiment is possible and makes its best guess at the optimal data point for the given θ
• Each iteration:
– Select the data point to collect via experimental design using θ
– Perform the single experiment
– Update the model parameters θ′ based on all data x′
• Similar in spirit to Expectation Maximization
[Pronzato & Thierry, 2000]
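A sketch of this loop for a locally D-optimal criterion; `grad_g` (model sensitivities), `fit` (parameter re-estimation), and `run_experiment` (the oracle) are hypothetical user-supplied callables, not from the cited paper:

```python
import numpy as np

def sequential_design(candidates, theta0, grad_g, fit, run_experiment, n_rounds=10):
    """Sequential (locally D-optimal) design: greedily pick the candidate x that
    most increases log det of the local Fisher information, run it, refit theta."""
    theta = np.asarray(theta0, dtype=float)
    d = len(theta)
    info = 1e-9 * np.eye(d)          # small ridge so log det is defined initially
    data = []
    for _ in range(n_rounds):
        def gain(x):
            a = grad_g(x, theta)     # sensitivity vector at the current theta
            return np.linalg.slogdet(info + np.outer(a, a))[1]
        x_next = max(candidates, key=gain)        # design step using current theta
        y = run_experiment(x_next)                # perform the single experiment
        data.append((x_next, y))
        theta = fit(data, theta)                  # update parameter estimates
        a = grad_g(x_next, theta)
        info = info + np.outer(a, a)              # accumulate information
    return theta, data
```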
52. Bayesian Experimental Design
• Effective when knowledge of a distribution for θ is available
• Example: KL divergence between posterior and prior:
$\arg\max_w \int_x \int_{\theta \in \Theta} D\big(p(\theta \mid w, x) \,\|\, p(\theta)\big)\, p(x \mid w)\, d\theta\, dx$
• Example: A-optimal design:
$\arg\min_w \int_x \int_{\theta \in \Theta} \operatorname{tr}(F^\top W F)^{-1}\, p(\theta \mid w, x)\, p(x \mid w)\, d\theta\, dx$
• Often sensitive to the assumed distributions
[Chaloner & Verdinelli, 1995]
53. Maximin Experimental Design
• Maximize the minimum gain
• Example: D-optimal design:
$\arg\max_w \min_{\theta \in \Theta} I(\theta) = \arg\min_w \max_{\theta \in \Theta} \log\det (F^\top W F)^{-1}$
• Example: KL divergence:
$\arg\max_w \min_{\theta \in \Theta} D\big(p(\theta \mid w, x) \,\|\, p(\theta)\big)$
• Does not require prior/empirical knowledge
• Good when very little is known about the distribution of the parameter θ
[Pronzato & Walter, 1988]
54. Topics for today
• Introduction: information theory
• Active learning
– Query by committee
– Uncertainty sampling
– Information-based loss functions
• Optimal experimental design
– A-optimal design
– D-optimal design
– E-optimal design
• Non-linear optimal experimental design
– Sequential experimental design
– Bayesian experimental design
– Maximin experimental design
• Response surface models
• Summary
55. Response Surface Methods
• Estimate the effects of local changes to the interventions (queries)
– In particular, estimate how to maximize the response
• Applications:
– Find optimal conditions for growing cell cultures
– Develop a robust process for chemical manufacturing
• Procedure for maximizing the response:
– Given a set of data points, interpolate a local surface (this local surface is called the “response surface”)
• Typically a quadratic polynomial is used, to obtain a Hessian
– Hill-climb or take a Newton step on the response surface to find the next x
– Use the next x to interpolate the subsequent response surface
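A minimal sketch of one iteration: fit a quadratic response surface by least squares, then take a Newton step to its stationary point; the toy response function below is made up:

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of y ≈ c + b·x + x^T S x (S symmetric); returns (c, b, S)."""
    n, d = X.shape
    pairs = [(i, j) for i in range(d) for j in range(i, d)]
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i, j in pairs]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    c, b = coef[0], coef[1:d + 1]
    S = np.zeros((d, d))
    for k, (i, j) in enumerate(pairs):
        S[i, j] = S[j, i] = coef[d + 1 + k] if i == j else coef[d + 1 + k] / 2
    return c, b, S

def newton_step_to_optimum(b, S):
    """Stationary point of the fitted surface: solve gradient b + 2 S x = 0."""
    return np.linalg.solve(2 * S, -b)

# Toy run: recover the maximizer of a noisy quadratic response
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 5 - (X[:, 0] - 0.3]**2 if False else 5 - (X[:, 0] - 0.3)**2 - 2 * (X[:, 1] + 0.2)**2 + 0.01 * rng.standard_normal(30)
c, b, S = fit_quadratic_surface(X, y)
print(newton_step_to_optimum(b, S))   # ≈ [0.3, -0.2]
```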
56. Response Surface Modeling
• Goal: approximate the function f(c) = score(minimize(c))
• 1. Fit a smoothed response surface to the data points
• 2. Minimize the response surface to find a new candidate
• 3. Use the method to find a nearby local minimum of the score function
• 4. Add the candidate to the data points
• 5. Re-fit the surface; repeat
[Figure: energy score over iterations, −140 to −80]
[Blum, unpublished]
57. Related ML Problems
• Reinforcement learning
– Interaction with the world
– A notion of accumulating rewards
• Semi-supervised learning
– Use the unlabelled data itself, not just as a pool of queries
• Core sets, active sets
– Select a small dataset that gives nearly the same performance as the full dataset; fast computation for large-scale problems
58. Summary
• Active learning
– Query by committee: distribution over classifiers; probabilistic; sequential
– Uncertainty sampling: predictive distribution at a point; distance function; sequential
– Information-based loss functions: maximize gain; sequential
• Optimal experimental design
– A-optimal design: minimize the trace of the inverse information matrix
– D-optimal design: minimize the log determinant of the inverse information matrix
– E-optimal design: minimize the largest eigenvalue of the inverse information matrix
• Non-linear optimal experimental design
– Sequential experimental design: multiple-shot experiments; little known of the parameter
– Bayesian experimental design: single-shot experiment; some idea of the parameter distribution
– Maximin experimental design: single-shot experiment; little known of the parameter distribution (range known)
• Response surface methods: sequential experiments for optimization