function [models, logP] = gwmcmc(minit, logPfuns, mccount, varargin)
%% Cascaded affine invariant ensemble MCMC sampler.
%
% GWMCMC is an implementation of the Goodman and Weare 2010 affine
% invariant ensemble Markov Chain Monte Carlo (MCMC) sampler. The problem
% with many traditional MCMC samplers is that they can have slow
% convergence for badly scaled problems, and that it is difficult to
% optimize the random walk for high-dimensional problems. This is where
% the GW-algorithm really excels, as it is affine invariant: it can achieve
% much better convergence on badly scaled problems and is simpler to get
% to work straight out of the box. For that reason it truly deserves to be
% called the MCMC hammer.
%
% (This code uses a cascaded variant of the Goodman and Weare algorithm.)
%
% INPUTS:
%     minit: an MxW matrix of initial values for each of the walkers in the
%            ensemble.
%  logPfuns: a cell of function handles returning the log probability of a
%            proposed model. Typically this cell will contain two function
%            handles: one to the logprior and another to the loglikelihood.
%   mccount: the desired total number of Monte Carlo proposals. This is the
%            total number, NOT the number per chain.
%
% Named parameter-value pairs:
%     'StepSize': unit-less stepsize (default=2.5).
%    'ThinChain': thin all the chains by only storing every N'th step (default=10).
%  'ProgressBar': show a text progress bar (default=true).
%     'Parallel': run the ensemble of walkers in parallel.
%       'BurnIn': fraction of the chain that should be removed.
%
% OUTPUTS:
%    models: an MxWxT matrix with the thinned Markov chains (with T samples
%            per walker).
%      logP: a PxWxT matrix of log probabilities for each model in models;
%            here P is the number of functions in logPfuns.
%
% Note on cascaded evaluation of log probabilities: the functions in
% logPfuns are evaluated in order, so a proposal can be rejected as soon as
% one of them returns -inf, without evaluating the remaining functions.
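For illustration, here is a minimal usage sketch of the interface documented above. The correlated Gaussian target, the walker count, and all numeric values are assumptions chosen for the example, not something specified in the original post.

% Sample a 2-parameter correlated Gaussian with gwmcmc (illustrative values).
mu    = [1; 3];                      % assumed "true" parameters
Sigma = [1 0.95; 0.95 1];            % deliberately correlated (badly scaled) target
iS    = inv(Sigma);
logprior = @(m) 0;                            % flat prior
loglike  = @(m) -0.5*(m-mu)'*iS*(m-mu);       % Gaussian log-likelihood (up to a constant)

M = 2;                               % number of model parameters
W = 10;                              % number of walkers in the ensemble
minit = randn(M, W);                 % MxW matrix of initial walker positions

% 100000 proposals in total across all walkers; thin by 10, drop 20% as burn-in.
[models, logP] = gwmcmc(minit, {logprior, loglike}, 100000, ...
                        'ThinChain', 10, 'BurnIn', 0.2);

samples = models(:, :);              % flatten the MxWxT chains into Mx(W*T) samples
disp(mean(samples, 2));              % posterior mean should be close to mu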