Personal Information

Organization / Workplace

Saint-Malo Area, France

Occupation

Statistician

Industry

Education

Website

xianblog.wordpress.com
Tags

bayesian statistics
abc algorithm
mcmc methods
abc
mcmc
simulation
bayesian model choice
bayes factor
monte carlo statistical methods
summary statistics
statistics
foundations
bayesian inference
sufficiency
empirical likelihood
model choice
mixture models
gibbs sampling
r
bayesian
importance sampling
metropolis hastings algorithm
approximate bayesian inference
random forests
prior selection
particles
mixtures
model
bayesian consistency
improper priors
intractable likelihood
population genetics
evidence
bridge sampling
sampling
misspecified models
wasserstein distance
hamiltonian monte carlo
rao-blackwellisation
bayesian statistics
testing statistical hypotheses
bayesian tests
bootstrap
nested
choice
cirm
likelihood-free methods
generative model
consistency
estimation
noninformative priors
bayesian nonparametrics
big data
reversibility
empirical bayes methods
unbiasedness
adaptivity
read paper
label switching
frequentist statistics
harmonic mean
warwick
inference
course
selection
dirichlet process mixture
speed of convergence
machine learning
random generation
normalising constant
ergodicity
markov chains
delayed acceptance
jeffreys prior
acceleration of mcmc algorithms
objective bayes
modelling
snps
smc
rss
slice sampling
accept-reject algorithms
sufficiency principle
hierarchical models
bernoulli factory
julian besag
parallelisation
royal statistical society
san antonio
discussion
k-means
conjugate priors
decision theory
nested sampling
oxford
monte
carlo
core
sequential monte carlo
reverse logistic regression
normalising flows
vae
stan
pdmp
bouncy particle sampler
abc consistency
curse of dimension
concentration
abc-mcmc
short course
econometrics
luc devroye
university of warwick
ratio of uniforms
numerical integration
bayesian analysis
approximate likelihood
gaussian process
jeffreys-lindley paradox
hpd region
score function
information
posterior error rate
1000 genome project
dauphine
parametric models
exponential families
jeffreys' prior
andrew gelman
examples
kingman's coalescent
effective dimension
round table
dic
harold jeffreys
loss functions
prior determination
université paris-dauphine
p-values
testing of hypotheses
mathematical statistics
simple null hypothesis
conditionality principle
foundations of statistics
statistical inference
marginal density
maximum likelihood
prediction error
clustering
art owen
integration
latent gaussian models
maximum entropy
hammersley-clifford theorem
likelihood complexity
time series
reversible jump
monte carlo methods
nonparametrics
amis
frequentist
prior
theory
bayes
jeffreys
importance
linear
regression
distributed inference
differential privacy
bayesian regression
privacy
mrc biostatistics unit
medical research council
university of cambridge
dirichlet process priors
chib's formula
marginalisation
non-reversible algorithms
stochastic approximation
brownian motion
restore algorithm
regeneration
diffusions
chib's approximation
university of padova
cdt
wgans
gans
bsl
chib's estimator
number of components
scoring rule
location-scale model
manifold
insufficient statistic
neural networks
partition function
noise contrastive estimation
gan
autoencoders
joint distribution
compatibility
markov processes
non-reversibility
marseille
joint statistical meeting
history of sciences
ronald fisher
annals of eugenics
j.b.s. haldane
karl pearson
francis galton
jsm 2020
eugenics
history of statistics
korean statistical society
large dimension
odes
utility
likelihood free methods
optimal design
asa
denver
iol
jsm 2019
cisea 2019
moment constraints
statistical models
poster
susceptible-infected-recovered model
nuts
no-u-turn sampler
leapfrog integrator
bayesian computational statistics
hmc
computational statistics
empirical cdf
hilbert curve
singapore
nus
jsm 2018
vancouver
partly deterministic markov process
bernstein-von-mises theorem
wales
gregynog hall
workshop
statistical learning
pkpd
mahalanobis conference
better together
poisson process
continuous time process
uniform ergodicity
imis
isaac newton institute
scalable inference
harvard university
dutch book
modelling
subjectivity
jrss
relativity
philosophy
masdoc
master
indirect inference
identifiability
method of moments
banff
non-parametrics
rkhs
birs
one model one forest
posterior normality
markov random field
abc-pmc
probability transform
fundamental theorem of simulation
reading group
ucl
london
asymptotic variance
black box method
transition kernel
mcmc algorithms
origamcmc
noisy monte carlo
reading
pseudo-marginal
lancaster
kd-tree
knn method
discretisation
infinite dimension
convergence
paradigm shift
testing
probabilistic numerics
lindley-jeffreys paradox
posterior probability
mala
random walk
langevin algorithm
reference-priors
ultimixt
cran
invariance
spatial statistics
gams
point processes
all of statistics
variance reduction
nips 2015
scalability
complex likelihood function
seattle
jsm 2015
subsampling
stopping rule
riemann integration
order statistics
non-informative prior
socks
conjugate priors
posterior mean
fiducial statistics
structural model
em algorithm
optimisation
gradient function
missing data models
asymptotic normality
mle
likelihood
cdf
glivenko-cantelli
perfect sampling
same algorithm
tempering
pygmies
admixture
buffon's needle
exam
lda
phylogenies
complex models
information loss
aic
deviance
log-score
travelling salesman
simulated annealing
uniform generator
sudokus
randomness
integrated likelihood
infinite series
buffon
russian roulette
jakob bernoulli
pmcmc
bruno de finetti
pierre simon laplace
dennis lindley
thomas bayes
isba
kullback-leibler divergence
roma
garch
fisher information
detailed balance
tree pruning
u-turn
ensae
estimating equations
annals of statistics
shrinkage estimators
integration by part
normal mean estimation
likelihood ratio
statistical mathematics
neyman-pearson theory
composite null hypothesis
type ii error
critical region
type i error
r exam
ks.test
likelihood principle
khajuraho
george casella
gainesville
varanasi
paris-dauphine
efron
classics
ridge
l1 penalty
ridge regression
least squares
model selection
journal of the royal statistical society c
applied statistics
hartigan
algorithms
optimization
bayesian estimation
normal model
minimaxity
gpus
gibbs random fields
conference
independent metropolis hastings algorithm
population monte carlo
mark girolami
hamiltonian
royal statistical society
variance
dark matter
flat universe
zero probability sets
savage-dickey ratio
measure theory
jim berger
bayesian core
texas
utsa
introducing monte carlo methods with r
frühwirth-schnatter
geweke
chib's method
potts model
classification
image analysis
scotland
stationarity
metropolis-hastings algorithm
society
royal
statistical
iii
yes
savage-dickey
eindhoven
ratio
software
decision
tests
priors
conjugate
factor
mean
harmonic
implementation
analysis
data
jump
reversible
entropy
maximum
maxent
obayes
09
probability
objective
rjmcmc
bridge
harold
defensive
normal
metropolis-hastings
models
generalised

## Presentations (175)

## Documents (9)

## Likes (1)

### Montpellier Math Colloquium

Christian Robert • 14 years ago
