Durham e-Theses

Aspects of Objective Priors and Computations for Bayesian Modelling

TANG, DOUDOU (2016) Aspects of Objective Priors and Computations for Bayesian Modelling. Doctoral thesis, Durham University.

PDF, 3875 KB

Abstract

Bayesian statistics is flourishing, not only because it provides ways to combine prior knowledge with statistical models, but also because many algorithms are now available to sample from the resulting posterior distributions. Specifying a good objective prior, however, can be very difficult, largely because ignorance has no unique definition. For sampling from posterior distributions, Markov chain Monte Carlo (MCMC) methods are the main tools, yet as statistical models become increasingly sophisticated, MCMC methods more efficient than the traditional ones are needed.

For objective prior specification, we present a new principle that expresses ignorance through the global distance structure. This principle assigns prior weight to points in parameter space according to how the corresponding statistical models are arranged within the global distance structure. The method is applied to simple problems such as the location, scale and location-scale families, and also to the one-way random-effects model, which has attracted considerable interest from researchers. The approach avoids the much-disputed dependence of the prior on the experimental design and allows the resulting prior to reflect how the models change with respect to the population rather than the collected samples.
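For orientation, a familiar benchmark in these simple cases (not the construction developed in the thesis) is the standard objective prior for a location-scale family, which is improper and invariant under the group structure of the family:

\[
\pi(\mu, \sigma) \;\propto\; \frac{1}{\sigma}, \qquad \mu \in \mathbb{R}, \quad \sigma > 0.
\]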

Among MCMC methods for sampling from posterior distributions, Hamiltonian Monte Carlo (HMC) has the potential to avoid random-walk behaviour by exploiting ideas from Hamiltonian dynamics. Its performance, however, depends on the step size required when numerically solving Hamilton's equations. We propose an algorithm, which we call HMC with stochastic step-size, that automatically tunes the step size by exploiting local curvature information. We also present a meta-algorithm that includes HMC, HMC with stochastic step-size and the ordinary Metropolis-Hastings algorithm as special cases.
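The step-size tuning proposed in the thesis is its own contribution, but the role the step size plays is already visible in plain HMC. Below is a minimal sketch of standard HMC with a leapfrog integrator in Python/NumPy; log_prob, grad_log_prob, epsilon and n_steps are illustrative placeholders, and the stochastic step-size scheme of the thesis is not reproduced here.

import numpy as np

def hmc_step(q, log_prob, grad_log_prob, epsilon=0.1, n_steps=20, rng=np.random):
    """One standard HMC proposal with unit mass matrix, accepted or rejected by Metropolis."""
    p = rng.standard_normal(q.shape)            # resample momentum
    q_new, p_new = q.copy(), p.copy()

    # Leapfrog integration of Hamilton's equations with step size epsilon
    p_new += 0.5 * epsilon * grad_log_prob(q_new)
    for _ in range(n_steps - 1):
        q_new += epsilon * p_new
        p_new += epsilon * grad_log_prob(q_new)
    q_new += epsilon * p_new
    p_new += 0.5 * epsilon * grad_log_prob(q_new)

    # Metropolis acceptance based on the change in total energy
    current_H = -log_prob(q) + 0.5 * (p @ p)
    proposed_H = -log_prob(q_new) + 0.5 * (p_new @ p_new)
    if np.log(rng.uniform()) < current_H - proposed_H:
        return q_new
    return q

Too large an epsilon makes the leapfrog trajectory unstable and proposals are rejected; too small an epsilon wastes gradient evaluations, which is precisely why adaptive choices of the step size matter.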

Finally, we turn to a sophisticated hierarchical model developed for analysing ecotoxicology data. We present ways to obtain more informative posterior samples by embedding the marginalized approach and advanced samplers within the Gibbs structure of the modified MCMCglmm algorithm provided by Craig (2013). Among a range of methods for the challenging problem of sampling the hyper-parameters of the model, the combination of the marginalized approach and HMC with stochastic step-size is found to be the best choice.
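As a rough illustration of the embedding idea only (not the modified MCMCglmm algorithm of Craig (2013)), a Metropolis-within-Gibbs sweep might keep ordinary conditional draws for the tractable blocks and hand the hyper-parameter block to an HMC-style update; draw_latent_effects, draw_residuals and hmc_update_hyper are hypothetical helpers.

def gibbs_sweep(state, draw_latent_effects, draw_residuals, hmc_update_hyper):
    # Blocks with tractable full conditionals keep their ordinary Gibbs draws
    # (hypothetical helpers standing in for the model-specific updates).
    state["u"] = draw_latent_effects(state)      # latent random effects
    state["e"] = draw_residuals(state)           # residual/dispersion parameters
    # The awkward hyper-parameter block is updated by an advanced sampler
    # targeting its full conditional, e.g. the hmc_step sketched above.
    state["theta"] = hmc_update_hyper(state)
    return state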

Item Type: Thesis (Doctoral)
Award: Doctor of Philosophy
Keywords: PhD research
Faculty and Department: Faculty of Science > Department of Mathematical Sciences
Thesis Date: 2016
Copyright: Copyright of this thesis is held by the author
Deposited On: 17 Feb 2016 14:59
