Markov Chain Monte Carlo (MCMC) methods use Markov chains to sample from probability distributions for use in Monte Carlo simulations. The Metropolis-Hastings algorithm proposes transitions to new states of the chain and accepts or rejects each proposal according to a probability calculation, which lets it sample from complex, high-dimensional distributions. The Gibbs sampler is a special case of Metropolis-Hastings in which each variable is updated by drawing from its conditional distribution given the current values of the other variables, so every proposed move is accepted. These MCMC methods make it possible to approximate integrals that are difficult to compute directly.
2. Monte Carlo method: a figure square
The area $\mu$ of a figure inside the unit square is unknown.
Let's sample a random value (r.v.) $S$: draw $x, y$ i.i.d. as flat (uniform) on $[0,1]$ and set
$$S = \begin{cases} 1, & (x, y) \in \text{figure}, \\ 0, & (x, y) \notin \text{figure}. \end{cases}$$
Clever notation: $S = I\big((x, y) \in \text{figure}\big)$.
i.i.d. means "independently and identically distributed".
Expectation of $S$: $\mathrm{E}\,S = \mu$.
[Figure: a figure of area $\mu$ inside the unit square $[0,1]^2$.]
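A minimal sketch of this estimator in Python; the quarter disc $x^2 + y^2 \le 1$, of area $\pi/4$, is an illustrative choice of figure, not one taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed figure for illustration: the quarter disc x^2 + y^2 <= 1, of area pi/4.
m = 100_000
x, y = rng.random(m), rng.random(m)     # x, y i.i.d. flat on [0, 1]
S = (x**2 + y**2 <= 1.0)                # S = I((x, y) in figure)
print("estimate:", S.mean(), "exact:", np.pi / 4)
```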
3. Monte Carlo method: efficiency
Law of Large Numbers:
$$\bar S_m = \frac{1}{m}\sum_{i=1}^{m} S_i \;\longrightarrow\; \mathrm{E}\,S.$$
Central Limit Theorem:
$$\bar S_m - \mathrm{E}\,S \;\sim\; N\!\left(0,\ \frac{\operatorname{var} S}{m}\right).$$
Variance: $\operatorname{var} S = \mathrm{E}\,(S - \mathrm{E}\,S)^2$, also notated as $\sigma^2$.
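A quick empirical check of the $m^{-1/2}$ error scaling, reusing the quarter-disc figure assumed above:

```python
import numpy as np

rng = np.random.default_rng(1)

# The absolute error should shrink roughly like sqrt(var(S) / m).
for m in (10**3, 10**4, 10**5, 10**6):
    x, y = rng.random(m), rng.random(m)
    S = (x**2 + y**2 <= 1.0)
    print(f"m = {m:>7}: |error| = {abs(S.mean() - np.pi / 4):.5f}, "
          f"sqrt(var S / m) = {np.sqrt(S.var() / m):.5f}")
```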
4. Monte Carlo Integration
We are evaluating $I = \int_D f(x)\,dx$. $D$ is the domain of $f(x)$ or its subset.
We can sample r.v. $x_i \in D$, the $x_i$ i.i.d. uniform in $D$:
$$\mathrm{E}\,f(x_i) = \frac{1}{|D|}\int_D f(x)\,dx = \frac{I}{|D|}.$$
The Monte Carlo estimation:
$$I_m = \frac{|D|}{m}\sum_{i=1}^{m} f(x_i), \qquad I_m - I \;\sim\; N\!\left(0,\ \frac{|D|^2}{m}\operatorname{var}_D f(x)\right).$$
Advantage:
o The multiplier $m^{-1/2}$ does not depend on the space dimension.
Disadvantages:
o a lot of samples are spent in the area where $f(x)$ is small;
o the variance $\operatorname{var}_D f(x)$, which determines the convergence time, can be large.
[Figure: $f(x)$ plotted over the domain $D$.]
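A hedged sketch of uniform Monte Carlo integration; the integrand $f(x) = e^{-x^2}$ and the domain $D = [0, 2]$ are illustrative assumptions, not taken from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)

f = lambda x: np.exp(-x**2)                # assumed integrand
a, b = 0.0, 2.0                            # assumed domain D = [0, 2], so |D| = 2
m = 100_000
x = rng.uniform(a, b, size=m)              # x_i i.i.d. uniform in D
I_m = (b - a) * f(x).mean()                # I_m = (|D| / m) * sum of f(x_i)
se = (b - a) * f(x).std() / np.sqrt(m)     # CLT standard error
print(f"I_m = {I_m:.5f} +/- {se:.5f}")     # the exact value is about 0.88208
```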
5. Monte Carlo importance integration
We are evaluating $I = \int_D f(x)\,dx$.
Let's sample $x_i \in D$ from a "trial" distribution $g(x)$ that "looks like" $f(x)$, with $f(x) \ne 0 \Rightarrow g(x) \ne 0$; the $x_i$ are i.i.d. in $D$ as $g(x)$, which "resembles" $f(x)$.
Thus
$$\mathrm{E}_g \frac{f(x_i)}{g(x_i)} = \int_D \frac{f(x)}{g(x)}\,g(x)\,dx = \int_D f(x)\,dx = I.$$
MC evaluation:
$$I_m = \frac{1}{m}\sum_{i=1}^{m} \frac{f(x_i)}{g(x_i)}; \qquad I_m - I \;\sim\; N\!\left(0,\ \frac{1}{m}\operatorname{var}_D \frac{f(x)}{g(x)}\right).$$
The more uniform the ratio $f(x)/g(x)$, the better (the smaller the variance).
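A sketch of importance integration under an assumed setup: the integrand $f(x) = e^{-x^2/2}$ (whose integral over the real line is exactly $\sqrt{2\pi}$) and the trial density $g = N(0, 1.5^2)$ are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

f = lambda x: np.exp(-x**2 / 2)            # assumed integrand; its integral is sqrt(2*pi)
sigma = 1.5                                # assumed trial: g is the N(0, sigma^2) density
g = lambda x: np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

m = 100_000
x = rng.normal(0.0, sigma, size=m)         # x_i i.i.d. as g
I_m = (f(x) / g(x)).mean()                 # (1/m) * sum of f(x_i) / g(x_i)
print(f"I_m = {I_m:.5f}, exact = {np.sqrt(2 * np.pi):.5f}")
```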
6. Another example of importance integration
We are evaluating $\mathrm{E}_\pi h(x) = \int h(x)\,\pi(x)\,dx$, where $\pi(x)$ is a distribution, i.e. $\int \pi(x)\,dx = 1$.
Sample $x_i$ from a distribution $g(\cdot)$ such that $\pi(x) \ne 0 \Rightarrow g(x) \ne 0$.
Importance weight: $w_i = \pi(x_i)/g(x_i)$;
$$\mathrm{E}_g\, w(x) = \int \frac{\pi(x)}{g(x)}\,g(x)\,dx = 1.$$
$$\mathrm{E}_\pi h(x) \;\approx\; \frac{1}{m}\sum_{i=1}^{m} h(x_i)\,w(x_i) \;\approx\; \frac{\sum_{i=1}^{m} h(x_i)\,w(x_i)}{\sum_{i=1}^{m} w(x_i)}, \qquad \text{since } \frac{1}{m}\sum_{i=1}^{m} w(x_i) \to 1.$$
Sampling from $\pi(x)$ itself corresponds to $w \equiv 1$:
$$\mathrm{E}_\pi h(x) \;\approx\; \frac{1}{m}\sum_{i=1}^{m} h(x_i).$$
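A sketch of the self-normalized estimator under an assumed setup: $\pi = N(0, 1)$ known only up to a constant, $h(x) = x^2$ (so $\mathrm{E}_\pi h = 1$), and a wider normal $N(0, 2^2)$ as the trial $g$:

```python
import numpy as np

rng = np.random.default_rng(4)

c_pi = lambda x: np.exp(-x**2 / 2)         # c * pi(x) with the constant c unknown
g = lambda x: np.exp(-x**2 / 8) / (2 * np.sqrt(2 * np.pi))   # assumed trial density

m = 100_000
x = rng.normal(0.0, 2.0, size=m)           # x_i i.i.d. as g
w = c_pi(x) / g(x)                         # weights known only up to c
h = x**2
print("self-normalized estimate:", (h * w).sum() / w.sum())  # should be near 1
```

With $\pi$ known only up to a constant, only the self-normalized form is usable: the unknown constant cancels in the ratio.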
7. Rejection sampling (Von Neumann, 1951)
We have a distribution $\pi(x)$ and we want to sample from it.
We are able to calculate $f(x) = c\,\pi(x)$ at any $x$, for some (possibly unknown) constant $c$.
We are able to sample from a $g(x)$ with an $M$ such that $M g(x) \ge f(x)$.
Thus, we can sample from $\pi(x)$:
Draw a value $x$ from $g(x)$.
Accept the value $x$ with the probability $f(x)/\big(M g(x)\big)$.
$$P(\text{accept}) = \int P(\text{accept} \mid x)\,g(x)\,dx = \int \frac{f(x)}{M g(x)}\,g(x)\,dx = \frac{c}{M},$$
$$P(x \mid \text{accept}) = \frac{P(\text{accept} \mid x)\,g(x)}{P(\text{accept})} = \frac{f(x)}{M g(x)}\,g(x)\,\frac{M}{c} = \pi(x).$$
[Figure: $f(x)$ under the envelope $M g(x)$.]
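A minimal rejection-sampling sketch; the target $\pi = \mathrm{Beta}(2, 2)$ through the unnormalized $f(x) = x(1-x)$, the envelope $g = \mathrm{Uniform}(0, 1)$, and $M = 1/4$ (since $\max f = 1/4$ and $g \equiv 1$) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

f = lambda x: x * (1 - x)       # assumed unnormalized target, c * pi(x)
M = 0.25                        # M g(x) >= f(x) on [0, 1] since g(x) = 1 and max f = 1/4

samples = []
while len(samples) < 10_000:
    x = rng.random()                       # draw x from g = Uniform(0, 1)
    if rng.random() < f(x) / M:            # accept with probability f(x) / (M g(x))
        samples.append(x)
print("sample mean:", np.mean(samples))    # Beta(2, 2) has mean 0.5
```

Here $c = \int f = 1/6$, so the expected acceptance rate is $c/M = 2/3$.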
8. Metropolis-Hastings algorithm (1953, 1970)
We want to be able to draw $x^{(i)}$ from a distribution $\pi(x)$. We know how to compute the value of a function $f(x)$ such that $f(x) \propto \pi(x)$ at each point, and we are able to draw $y$ from $T(y \mid x)$ (the instrumental distribution, or transition kernel).
Let's denote the $i$-th step result as $x^{(i)}$.
Draw $y^{(i)}$ from $T(y \mid x^{(i)})$. $T(y \mid x)$ is symmetric in pure Metropolis. It is an analog of $g$ in importance sampling.
Transition (acceptance) probability:
$$\alpha(y^{(i)} \mid x^{(i)}) = \min\!\left(1,\ \frac{T(x^{(i)} \mid y^{(i)})\,f(y^{(i)})}{T(y^{(i)} \mid x^{(i)})\,f(x^{(i)})}\right).$$
The new value is accepted, $x^{(i+1)} = y^{(i)}$, with probability $\alpha(y^{(i)} \mid x^{(i)})$. Otherwise, it is rejected: $x^{(i+1)} = x^{(i)}$.
[Figure: $f(x)$ with the proposal $T$ centered at $x^{(i)}$; a move from $x^{(i)}$ to $x^{(i+1)}$.]
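A minimal random-walk Metropolis sketch; the unnormalized target $f(x) = e^{-x^2/2}$ (i.e. $\pi = N(0, 1)$) and the Gaussian proposal are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)

f = lambda x: np.exp(-x**2 / 2)            # assumed f(x), proportional to pi(x)

x = 0.0                                    # x^(0)
chain = []
for _ in range(50_000):
    y = x + rng.normal(0.0, 1.0)           # symmetric T(y|x), so the T ratio cancels
    if rng.random() < min(1.0, f(y) / f(x)):
        x = y                              # accept: x^(i+1) = y^(i)
    chain.append(x)                        # on rejection, x^(i+1) = x^(i)

chain = np.array(chain[5_000:])            # drop a burn-in
print("mean:", chain.mean(), "var:", chain.var())   # should be near 0 and 1
```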
9. Why does it work: the local balance
Let's show that if $x$ is already distributed as $\pi(x) \propto f(x)$, then the MH algorithm keeps the distribution.
Local balance condition for two points $x$ and $y$:
$$f(x)\,T(y \mid x)\,\alpha(y \mid x) = f(y)\,T(x \mid y)\,\alpha(x \mid y).$$
Let's check it:
$$f(x)\,T(y \mid x)\,\alpha(y \mid x) = f(x)\,T(y \mid x)\,\min\!\left(1,\ \frac{T(x \mid y)\,f(y)}{T(y \mid x)\,f(x)}\right) = \min\big(T(y \mid x)\,f(x),\ T(x \mid y)\,f(y)\big) = f(y)\,T(x \mid y)\,\alpha(x \mid y).$$
The balance is stable: $f(x)\,T(y \mid x)\,\alpha(y \mid x)$ is the flow from $x$ to $y$, and $f(y)\,T(x \mid y)\,\alpha(x \mid y)$ is the flow from $y$ to $x$.
The stable local balance is enough (BTW, it is not a necessary condition).
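The local balance can also be verified numerically. A sketch on an assumed small discrete state space with a uniform proposal (both choices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
f = rng.random(n) + 0.1                # an arbitrary unnormalized target on states 0..n-1
T = np.full((n, n), 1.0 / n)           # proposal T(y|x): uniform over all states

# alpha(y|x) = min(1, T(x|y) f(y) / (T(y|x) f(x)))
alpha = np.minimum(1.0, (T.T * f[None, :]) / (T * f[:, None]))
flow = f[:, None] * T * alpha          # flow(x -> y) = f(x) T(y|x) alpha(y|x)
print("local balance holds:", np.allclose(flow, flow.T))
```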
10. Markov chains, Maximization, Simulated Annealing
The $x^{(i)}$ created as described above form a Markov chain (MC) with a transition kernel $\tilde T(x^{(i+1)} \mid x^{(i)})$ that combines the proposal $T$ and the acceptance $\alpha$. The fact that the chain has a stationary distribution, and the convergence of the chain to that distribution, can be proved by the methods of Markov chain theory.
Minimization. $C(x)$ is a cost (a fine):
$$f(x) = \exp\!\left(-\frac{C(x) - C_{\min}}{t}\right).$$
We can characterize the transition kernel with a temperature $t$. Then we can decrease the temperature step-by-step (simulated annealing). MCMC and SA are very effective for optimization, since gradient methods tend to get locked in a local optimum, while pure MC is extremely ineffective.
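A minimal simulated-annealing sketch; the cost $C(x) = x^2 + 3\sin^2(5x)$ (global minimum at $x = 0$), the cooling schedule, and all tuning constants are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(8)

C = lambda x: x**2 + 3 * np.sin(5 * x)**2   # assumed multimodal cost, minimum at x = 0

x, t = 4.0, 2.0                             # start far from the optimum, high temperature
for _ in range(20_000):
    y = x + rng.normal(0.0, 0.5)
    # Metropolis acceptance for f(x) = exp(-(C(x) - C_min)/t); C_min cancels in the ratio.
    if rng.random() < np.exp(-(C(y) - C(x)) / t):
        x = y
    t = max(0.01, t * 0.9995)               # decrease the temperature step-by-step
print("found x =", round(x, 3), "with cost", round(C(x), 4))
```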
11. MCMC prior and Bayesian paradigm
$$P(M \mid D) = \frac{P(D \mid M)\,P(M)}{P(D)}; \qquad \text{posterior} \propto \text{likelihood} \times \text{prior};$$
here, $P(D)$ is the evidence.
MCMC and its variations are often used for the best-model search.
Let's formulate some requirements for the algorithm, and thus for the transition kernel:
o We want it not to depend on the current data.
o We want to minimize the rejection rate.
So, an effective transition kernel is one such that the prior $P(M)$ is its stationary distribution.
12. Terminology: names of related algorithms
o MCMC, Metropolis, Metropolis-Hastings, hybrid Metropolis, configurational bias Monte-Carlo, exchange Monte-Carlo, multigrid Monte-Carlo (MGMC), slice sampling, RJMCMC (samples the dimensionality of the space), Multiple-Try Metropolis, Hybrid Monte-Carlo…
o Simulated annealing, Monte-Carlo annealing, statistical cooling, umbrella sampling, probabilistic hill climbing, probabilistic exchange algorithm, parallel tempering, stochastic relaxation…
o Gibbs algorithm, successive over-relaxation…
13. Gibbs Sampler (Geman and Geman, 1984)
Now, $x$ is a $k$-dimensional variable: $x = (x_1, x_2, \ldots, x_k)$.
Let's denote $x_{-m} = (x_1, \ldots, x_{m-1}, x_{m+1}, \ldots, x_k)$, $1 \le m \le k$.
On each step of the Markov chain we choose the "current coordinate" $m_i$. Then, we calculate the conditional distribution $f(x_{m_i} \mid x_{-m_i}^{(i)})$ and draw the next value $y_{m_i}^{(i)}$ from that distribution.
All other coordinates are the same as on the previous step: $y_{-m_i}^{(i)} = x_{-m_i}^{(i)}$.
For such a transition kernel,
$$\alpha(y^{(i)} \mid x^{(i)}) = \min\!\left(1,\ \frac{T(x^{(i)} \mid y^{(i)})\,f(y^{(i)})}{T(y^{(i)} \mid x^{(i)})\,f(x^{(i)})}\right) = 1.$$
o We have no rejects, so the procedure is very effective.
o The "temperature" decreases rather fast.
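A minimal Gibbs-sampler sketch; the target, a bivariate normal with unit variances and correlation $\rho$, whose full conditionals are $x_1 \mid x_2 \sim N(\rho x_2,\ 1 - \rho^2)$ and symmetrically, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(9)

rho = 0.8                                  # assumed correlation of the target
x1, x2 = 0.0, 0.0
chain = []
for _ in range(50_000):
    # Each coordinate is redrawn from its full conditional, so no draw is ever rejected.
    x1 = rng.normal(rho * x2, np.sqrt(1 - rho**2))
    x2 = rng.normal(rho * x1, np.sqrt(1 - rho**2))
    chain.append((x1, x2))

chain = np.array(chain[5_000:])            # drop a burn-in
print("empirical correlation:", np.corrcoef(chain.T)[0, 1])   # should be near 0.8
```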
14. Inverse transform sampling (well-known)
We want to sample from the density $\pi(x)$. We know how to calculate the inverse function of the cumulative distribution.
Generate a random number from the $[0,1]$ uniform distribution; call this $u_i$.
Compute the value $x_i$ such that
$$\int_{-\infty}^{x_i} \pi(x)\,dx = u_i;$$
then $x_i$ is a random number drawn from the distribution described by $\pi(x)$.
[Figure: the CDF $\int^{x} \pi(x')\,dx'$ rises from 0 to 1; a uniform $u$ on the vertical axis maps to $x = \mathrm{CDF}^{-1}(u)$ on the horizontal axis, turning the uniform density in $u$ into the density $\pi(x)$ in $x$.]
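A minimal inverse-transform sketch; the exponential density $\pi(x) = \lambda e^{-\lambda x}$, whose CDF inverts in closed form, is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(10)

# CDF(x) = 1 - exp(-lam * x), so CDF^{-1}(u) = -ln(1 - u) / lam.
lam = 2.0
u = rng.random(100_000)                    # u_i uniform on [0, 1]
x = -np.log(1.0 - u) / lam                 # x_i = CDF^{-1}(u_i)
print("sample mean:", x.mean(), "expected:", 1 / lam)
```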
15. Slice sampling (Neal, 2003)
Sampling $x$ from $f(x)$ is equivalent to sampling $(x, y)$ pairs uniformly from the area under $f(x)$.
So, we introduce an auxiliary variable $y$ and iterate as follows:
for a sample $x_t$, we choose $y_t$ uniformly from the interval $[0, f(x_t)]$;
given $y_t$, we choose $x_{t+1}$ uniformly at random from the slice $\{x : f(x) \ge y_t\}$;
the sample of $x$ distributed as $f(x)$ is obtained by ignoring the $y$ values.
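A minimal slice-sampling sketch for an assumed target $f(x) = e^{-x^2/2}$, where the slice $\{x : f(x) \ge y\}$ is the interval $[-b, b]$ with $b = \sqrt{-2\ln y}$ and can be sampled directly; general targets need, e.g., Neal's stepping-out procedure:

```python
import numpy as np

rng = np.random.default_rng(11)

f = lambda x: np.exp(-x**2 / 2)            # assumed target

x = 0.0
chain = []
for _ in range(50_000):
    y = rng.uniform(0.0, f(x))             # y_t uniform on [0, f(x_t)]
    b = np.sqrt(-2.0 * np.log(y))          # slice {x: f(x) >= y} = [-b, b]
    x = rng.uniform(-b, b)                 # x_{t+1} uniform on the slice
    chain.append(x)                        # the y values are ignored

chain = np.array(chain)
print("mean:", chain.mean(), "var:", chain.var())   # N(0, 1): near 0 and 1
```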
16. Literature
Liu, J.S. (2002) Monte Carlo Strategies in Scientific Computing. Springer-Verlag, NY, Berlin, Heidelberg.
Robert, C.P. (1998) Discretization and MCMC Convergence Assessment. Springer-Verlag.
van Laarhoven, P.M.J. and Aarts, E.H.L. (1988) Simulated Annealing: Theory and Applications. Kluwer Academic Publishers.
Geman, S. and Geman, D. (1984) Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6, 721-741.
Besag, J., Green, P., Higdon, D., and Mengersen, K. (1995) Bayesian computation and stochastic systems. Statistical Science, 10(1), 3-66.
Lawrence, C.E., Altschul, S.F., Boguski, M.S., Liu, J.S., Neuwald, A.F., and Wootton, J.C. (1993) Detecting subtle sequence signals: a Gibbs sampling strategy for multiple alignment. Science, 262, 208-214.
Sivia, D.S. (1996) Data Analysis: A Bayesian Tutorial. Clarendon Press, Oxford.
Neal, R.M. (2003) Slice sampling. The Annals of Statistics, 31(3), 705-767.
http://civs.ucla.edu/MCMC/MCMC_tutorial.htm
Ross, S. A First Course in Probability.
Sobol, I.M. The Monte Carlo Method (in Russian: Соболь И.М. Метод Монте-Карло).
Sometimes, it works:
Favorov, A.V., Andreewski, T.V., Sudomoina, M.A., Favorova, O.O., Parmigiani, G., and Ochs, M.F. (2005) A Markov chain Monte Carlo technique for identification of combinations of allelic variants underlying complex diseases in humans. Genetics, 171(4), 2113-2121.
Favorov, A.V., Gelfand, M.S., Gerasimova, A.V., Ravcheev, D.A., Mironov, A.A., and Makeev, V.J. (2005) A Gibbs sampler for identification of symmetrically structured, spaced DNA motifs with improved estimation of the signal length. Bioinformatics, 21(10), 2240-2245.