The Journal is a peer-reviewed publication of the R Foundation for Statistical Computing. Communications regarding this publication should be addressed to the editors. All articles are copyrighted by the respective authors. Prospective authors will find detailed and up-to-date submission instructions on the Journal's homepage.

Editor-in-Chief:
Peter Dalgaard
Center for Statistics
Copenhagen Business School
Solbjerg Plads 3
2000 Frederiksberg
Denmark

Editorial Board: Vince Carey, Martyn Plummer, and Heather Turner.

Editor Programmer's Niche: Bill Venables
Editor Help Desk: Uwe Ligges

Editor Book Reviews: G. Jay Kerns
Department of Mathematics and Statistics
Youngstown State University
Youngstown, Ohio 44555-0002
USA
gkerns@ysu.edu

R Journal Homepage: http://journal.r-project.org/
Email of editors and editorial board: firstname.lastname@R-project.org

The R Journal is indexed/abstracted by EBSCO, DOAJ.

The R Journal Vol. 2/2, December 2010    ISSN 2073-4859
Editorial

by Peter Dalgaard

Welcome to the 2nd issue of the 2nd volume of The R Journal.

I am pleased to say that we can offer ten peer-reviewed papers this time. Many thanks go to the authors and the reviewers who ensure that our articles live up to high academic standards. The transition from R News to The R Journal is now nearly completed. We are now listed by EBSCO and the registration procedure with Thomson Reuters is well on the way. We thereby move into the framework of scientific journals and away from the grey-literature newsletter format; however, it should be stressed that R News was a fairly high-impact piece of grey literature: a cited reference search turned up around 1300 references to the just over 200 papers that were published in R News!

I am particularly happy to see the paper by Soetaert et al. on differential equation solvers. In many fields of research, the natural formulation of models is via local relations at the infinitesimal level, rather than via closed-form mathematical expressions, and quite often solutions rely on simplifying assumptions. My own PhD work, some 25 years ago, concerned diffusion of substances within the human eye, with the ultimate goal of measuring the state of the blood-retinal barrier. Solutions for this problem could be obtained for short timespans, if one assumed that the eye was completely spherical. Extending the solutions to accommodate more realistic models (a necessity for fitting actual experimental data) resulted in quite unwieldy formulas, and even then did not give you the kind of modelling freedom that you really wanted to elucidate the scientific issue.

In contrast, numerical procedures could fairly easily be set up and modified to better fit reality. The main problem was that they tended to be computationally demanding. Especially for transient solutions in two or three spatial dimensions, computers simply were not fast enough in a time where numerical performance was measured in fractions of a MFLOPS (million floating point operations per second). Today, the relevant measure is GFLOPS and we should be getting much closer to practicable solutions.

However, raw computing power is not sufficient; there are non-obvious aspects of numerical analysis that should not be taken lightly, notably issues of stability and accuracy. There is a reason that numerical analysis is a scientific field in its own right.

From a statistician's perspective, being able to fit models to actual data is of prime importance. For models with only a few parameters, you can get quite far with nonlinear regression and a good numerical solver. For ill-posed problems with functional parameters (the so-called "inverse problems"), and for stochastic differential equations, there still appears to be work to be done. Soetaert et al. do not go into these issues, but I hope that their paper will be an inspiration for further work.

With this issue, in accordance with the rotation rules of the Editorial Board, I step down as Editor-in-Chief, to be succeeded by Heather Turner. Heather has already played a key role in the transition from R News to The R Journal, as well as being probably the most efficient Associate Editor on the Board. The Editorial Board will be losing last year's Editor-in-Chief, Vince Carey, who has now been on board for the full four years. We shall miss Vince, who has always been good for a precise and principled argument and in the process taught at least me several new words. We also welcome Hadley Wickham as a new Associate Editor and member of the Editorial Board.

Season's greetings and best wishes for a happy 2011!
CONTRIBUTED RESEARCH ARTICLES

Solving Differential Equations in R

by Karline Soetaert, Thomas Petzoldt and R. Woodrow Setzer¹

¹ The views expressed in this paper are those of the authors and do not necessarily reflect the views or policies of the U.S. Environmental Protection Agency.

Abstract: Although R is still predominantly applied for statistical analysis and graphical representation, it is rapidly becoming more suitable for mathematical computing. One of the fields where considerable progress has been made recently is the solution of differential equations. Here we give a brief overview of differential equations that can now be solved by R.

Introduction

Differential equations describe exchanges of matter, energy, information or any other quantities, often as they vary in time and/or space. Their thorough analytical treatment forms the basis of fundamental theories in mathematics and physics, and they are increasingly applied in chemistry, life sciences and economics.

Differential equations are solved by integration, but unfortunately, for many practical applications in science and engineering, systems of differential equations cannot be integrated to give an analytical solution, but rather need to be solved numerically.

Many advanced numerical algorithms that solve differential equations are available as (open-source) computer codes, written in programming languages like FORTRAN or C and that are available from repositories like GAMS (http://gams.nist.gov/) or NETLIB (www.netlib.org).

Depending on the problem, mathematical formalisations may consist of ordinary differential equations (ODE), partial differential equations (PDE), differential algebraic equations (DAE), or delay differential equations (DDE). In addition, a distinction is made between initial value problems (IVP) and boundary value problems (BVP).

With the introduction of R-package odesolve (Setzer, 2001), it became possible to use R (R Development Core Team, 2009) for solving very simple initial value problems of systems of ordinary differential equations, using the lsoda algorithm of Hindmarsh (1983) and Petzold (1983). However, many real-life applications, including physical transport modeling, equilibrium chemistry or the modeling of electrical circuits, could not be solved with this package.

Since odesolve, much effort has been made to improve R's capabilities to handle differential equations, mostly by incorporating published and well tested numerical codes, such that now a much more complete repertoire of differential equations can be numerically solved.

More specifically, the following types of differential equations can now be handled with add-on packages in R:

• Initial value problems (IVP) of ordinary differential equations (ODE), using package deSolve (Soetaert et al., 2010b).
• Initial value differential algebraic equations (DAE), package deSolve.
• Initial value partial differential equations (PDE), packages deSolve and ReacTran (Soetaert and Meysman, 2010).
• Boundary value problems (BVP) of ordinary differential equations, using package bvpSolve (Soetaert et al., 2010a), or ReacTran and rootSolve (Soetaert, 2009).
• Initial value delay differential equations (DDE), using packages deSolve or PBSddesolve (Couture-Beil et al., 2010).
• Stochastic differential equations (SDE), using packages sde (Iacus, 2008) and pomp (King et al., 2008).

In this short overview, we demonstrate how to solve the first four types of differential equations in R. It is beyond the scope to give an exhaustive overview about the vast number of methods to solve these differential equations and their theory, so the reader is encouraged to consult one of the numerous textbooks (e.g., Ascher and Petzold, 1998; Press et al., 2007; Hairer et al., 2009; Hairer and Wanner, 2010; LeVeque, 2007, and many others).

In addition, a large number of analytical and numerical methods exists for the analysis of bifurcations and stability properties of deterministic systems, the efficient simulation of stochastic differential equations or the estimation of parameters. We do not deal with these methods here.

Types of differential equations

Ordinary differential equations

Ordinary differential equations describe the change of a state variable y as a function f of one independent variable t (e.g., time or space), of y itself, and, optionally, a set of other variables p, often called parameters:

    y' = dy/dt = f(t, y, p)
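The generic form y' = f(t, y, p) is all a numerical integrator needs. As a minimal sketch of the idea in base R (no add-on packages; the logistic function used for f here is a hypothetical illustration, not an example from this article), the forward Euler method steps such a system through time:

```r
# Forward Euler for the generic ODE dy/dt = f(t, y, p).
# f is a hypothetical logistic growth model: dy/dt = p * y * (1 - y).
f <- function(t, y, p) p * y * (1 - y)

euler <- function(f, y0, times, p) {
  y <- numeric(length(times))
  y[1] <- y0
  for (i in seq_along(times)[-1]) {
    h <- times[i] - times[i - 1]              # local step size
    y[i] <- y[i - 1] + h * f(times[i - 1], y[i - 1], p)
  }
  y
}

out <- euler(f, y0 = 0.1, times = seq(0, 10, by = 0.01), p = 1)
```

Dedicated solvers such as those in deSolve differ from this sketch mainly in using higher-order formulas, error control and adaptive step sizes, but the interface idea is the same: the user supplies f, the initial state, the output times and the parameters.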
In many cases, solving differential equations requires the introduction of extra conditions. In the following, we concentrate on the numerical treatment of two classes of problems, namely initial value problems and boundary value problems.

Initial value problems

If the extra conditions are specified at the initial value of the independent variable, the differential equations are called initial value problems (IVP).

There exist two main classes of algorithms to numerically solve such problems, so-called Runge-Kutta formulas and linear multistep formulas (Hairer et al., 2009; Hairer and Wanner, 2010). The latter contains two important families, the Adams family and the backward differentiation formulae (BDF).

Another important distinction is between explicit and implicit methods, where the latter methods can solve a particular class of equations (so-called "stiff" equations) where explicit methods have problems with stability and efficiency. Stiffness occurs for instance if a problem has components with different rates of variation according to the independent variable. Very often there will be a tradeoff between using explicit methods that require little work per integration step and implicit methods which are able to take larger integration steps, but need (much) more work for one step.

In R, initial value problems can be solved with functions from package deSolve (Soetaert et al., 2010b), which implements many solvers from ODEPACK (Hindmarsh, 1983), the code vode (Brown et al., 1989), and the differential algebraic equation solver daspk (Brenan et al., 1996), all belonging to the linear multistep methods, and comprising Adams methods as well as backward differentiation formulae. The former methods are explicit, the latter implicit. In addition, this package contains a de-novo implementation of a rather general Runge-Kutta solver based on Dormand and Prince (1980); Prince and Dormand (1981); Bogacki and Shampine (1989); Cash and Karp (1990) and using ideas from Butcher (1987) and Press et al. (2007). Finally, the implicit Runge-Kutta method radau (Hairer et al., 2009) has been added recently.

Boundary value problems

If the extra conditions are specified at different values of the independent variable, the differential equations are called boundary value problems (BVP). A standard textbook on this subject is Ascher et al. (1995).

Package bvpSolve (Soetaert et al., 2010a) implements three methods to solve boundary value problems. The simplest solution method is the single shooting method, which combines initial value problem integration with a nonlinear root finding algorithm (Press et al., 2007). Two more stable solution methods implement a mono implicit Runge-Kutta (MIRK) code, based on the FORTRAN code twpbvpC (Cash and Mazzia, 2005), and the collocation method, based on the FORTRAN code colnew (Bader and Ascher, 1987). Some boundary value problems can also be solved with functions from packages ReacTran and rootSolve (see below).

Partial differential equations

In contrast to ODEs, where there is only one independent variable, partial differential equations (PDE) contain partial derivatives with respect to more than one independent variable, for instance t (time) and x (a spatial dimension). To distinguish this type of equations from ODEs, the derivatives are represented with the ∂ symbol, e.g.

    ∂y/∂t = f(t, x, y, ∂y/∂x, p)

Partial differential equations can be solved by subdividing one or more of the continuous independent variables in a number of grid cells, and replacing the derivatives by discrete, algebraic approximate equations, so-called finite differences (cf. LeVeque, 2007; Hundsdorfer and Verwer, 2003).

For time-varying cases, it is customary to discretise the spatial coordinate(s) only, while time is left in continuous form. This is called the method-of-lines, and in this way, one PDE is translated into a large number of coupled ordinary differential equations, that can be solved with the usual initial value problem solvers (cf. Hamdi et al., 2007). This applies to parabolic PDEs such as the heat equation, and to hyperbolic PDEs such as the wave equation.

For time-invariant problems, usually all independent variables are discretised, and the derivatives approximated by algebraic equations, which are solved by root-finding techniques. This technique applies to elliptic PDEs.

R-package ReacTran provides functions to generate finite differences on a structured grid. After that, the resulting time-varying cases can be solved with specially-designed functions from package deSolve, while time-invariant cases can be solved with root-solving methods from package rootSolve.

Differential algebraic equations

Differential-algebraic equations (DAE) contain a mixture of differential (f) and algebraic equations (g), the latter e.g. for maintaining mass-balance conditions:

    y' = f(t, y, p)
    0 = g(t, y, p)

Important for the solution of a DAE is its index. The index of a DAE is the number of differentiations
needed until a system consisting only of ODEs is obtained.

Function daspk (Brenan et al., 1996) from package deSolve solves (relatively simple) DAEs of index at most 1, while function radau (Hairer et al., 2009) solves DAEs of index up to 3.

Implementation details

The implemented solver functions are explained by means of the ode-function, used for the solution of initial value problems. The interfaces to the other solvers have an analogous definition:

ode(y, times, func, parms, method = c("lsoda",
    "lsode", "lsodes", "lsodar",
    "vode", "daspk", "euler", "rk4",
    "ode23", "ode45", "radau", "bdf",
    "bdf_d", "adams", "impAdams",
    "impAdams_d"), ...)

To use this, the system of differential equations can be defined as an R-function (func) that computes the derivatives in the ODE system (the model definition) according to the independent variable (e.g. time t). func can also be a function in a dynamically loaded shared library (Soetaert et al., 2010c) and, in addition, some solvers support also the supply of an analytically derived function of partial derivatives (Jacobian matrix).

If func is an R-function, it must be defined as:

func <- function(t, y, parms, ...)

where t is the actual value of the independent variable (e.g. the current time point in the integration), y is the current estimate of the variables in the ODE system, parms is the parameter vector and ... can be used to pass additional arguments to the function.

The return value of func should be a list, whose first element is a vector containing the derivatives of y with respect to t, and whose next elements are optional global values that can be recorded at each point in times. The derivatives must be specified in the same order as the state variables y.

Depending on the algorithm specified in argument method, numerical simulation proceeds either exactly at the time steps specified in times, or using time steps that are independent from times and where the output is generated by interpolation. With the exception of method euler and several fixed-step Runge-Kutta methods, all algorithms have automatic time stepping, which can be controlled by setting accuracy requirements (see below) or by using optional arguments like hini (initial time step), hmin (minimal time step) and hmax (maximum time step). Specific details, e.g. about the applied interpolation methods, can be found in the manual pages and the original literature cited there.

Numerical accuracy

Numerical solution of a system of differential equations is an approximation and therefore prone to numerical errors, originating from several sources:

1. time step and accuracy order of the solver,
2. floating point arithmetics,
3. properties of the differential system and stability of the solution algorithm.

For methods with automatic stepsize selection, accuracy of the computation can be adjusted using the non-negative arguments atol (absolute tolerance) and rtol (relative tolerance), which control the local errors of the integration.

Like R itself, all solvers use double-precision floating-point arithmetics according to IEEE Standard 754 (2008), which means that they can represent numbers between approx. ±2.25 × 10⁻³⁰⁸ and approx. ±1.8 × 10³⁰⁸, with 16 significant digits. It is therefore not advisable to set rtol below 10⁻¹⁶, except setting it to zero with the intention to use absolute tolerance exclusively.

The solvers provided by the packages presented below have proven to be quite robust in most practical cases; however, users should always be aware of the problems and limitations of numerical methods and carefully check results for plausibility. The section "Troubleshooting" in the package vignette (Soetaert et al., 2010d) should be consulted as a first source for solving typical problems.

Examples

An initial value ODE

Consider the famous van der Pol equation (van der Pol and van der Mark, 1927), that describes a non-conservative oscillator with non-linear damping and which was originally developed for electrical circuits employing vacuum tubes. The oscillation is described by means of a 2nd order ODE:

    z'' − µ (1 − z²) z' + z = 0

Such a system can be routinely rewritten as a system of two 1st order ODEs, if we substitute z with y1 and z' with y2:

    y1' = y2
    y2' = µ · (1 − y1²) · y2 − y1

There is one parameter, µ, and two differential variables, y1 and y2, with initial values (at t = 0):

    y1(t=0) = 2
    y2(t=0) = 0
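To make this reduction to first-order form concrete, the system can be integrated with a self-contained base-R sketch of the classical fixed-step fourth-order Runge-Kutta formula (an illustration only: the function names and the step size are our own choices; the article itself uses deSolve's adaptive solvers, shown below):

```r
# First-order form of the van der Pol equation: y1' = y2, y2' = mu*(1-y1^2)*y2 - y1
vdpol_deriv <- function(t, y, mu) {
  c(y[2], mu * (1 - y[1]^2) * y[2] - y[1])
}

# Classical 4th-order Runge-Kutta with a fixed step (illustrative sketch)
rk4 <- function(f, y0, times, p) {
  out <- matrix(NA_real_, length(times), length(y0))
  out[1, ] <- y0
  for (i in seq_along(times)[-1]) {
    h  <- times[i] - times[i - 1]
    t0 <- times[i - 1]
    y  <- out[i - 1, ]
    k1 <- f(t0,         y,              p)
    k2 <- f(t0 + h / 2, y + h / 2 * k1, p)
    k3 <- f(t0 + h / 2, y + h / 2 * k2, p)
    k4 <- f(t0 + h,     y + h * k3,     p)
    out[i, ] <- y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
  }
  out
}

sol <- rk4(vdpol_deriv, y0 = c(2, 0), times = seq(0, 30, by = 0.01), p = 1)
```

With µ = 1 the trajectory settles onto the well-known limit cycle of amplitude close to 2; a fixed-step explicit method like this one would, however, be hopelessly inefficient for the stiff case µ = 1000 treated next.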
The van der Pol equation is often used as a test problem for ODE solvers, as, for large µ, its dynamics consists of parts where the solution changes very slowly, alternating with regions of very sharp changes. This "stiffness" makes the equation quite challenging to solve.

In R, this model is implemented as a function (vdpol) whose inputs are the current time (t), the values of the state variables (y), and the parameters (mu); the function returns a list with as first element the derivatives, concatenated:

vdpol <- function (t, y, mu) {
  list(c(
    y[2],
    mu * (1 - y[1]^2) * y[2] - y[1]
  ))
}

After defining the initial condition of the state variables (yini), the model is solved, and output written at selected time points (times), using deSolve's integration function ode. The default routine lsoda, which is invoked by ode, automatically switches between stiff and non-stiff methods, depending on the problem (Petzold, 1983).

We run the model for a typically stiff (mu = 1000) and nonstiff (mu = 1) situation:

library(deSolve)
yini <- c(y1 = 2, y2 = 0)

stiff <- ode(y = yini, func = vdpol,
  times = 0:3000, parms = 1000)

nonstiff <- ode(y = yini, func = vdpol,
  times = seq(0, 30, by = 0.01),
  parms = 1)

The model returns a matrix, of class deSolve, with in its first column the time values, followed by the values of the state variables:

head(stiff, n = 3)
     time       y1            y2
[1,]    0 2.000000  0.0000000000
[2,]    1 1.999333 -0.0006670373
[3,]    2 1.998666 -0.0006674088

Figures are generated using the S3 plot method for objects of class deSolve:

plot(stiff, type = "l", which = "y1",
  lwd = 2, ylab = "y",
  main = "IVP ODE, stiff")

plot(nonstiff, type = "l", which = "y1",
  lwd = 2, ylab = "y",
  main = "IVP ODE, nonstiff")

[Figure 1: Solution of the van der Pol equation, an initial value ordinary differential equation, stiff case, µ = 1000.]

[Figure 2: Solution of the van der Pol equation, an initial value ordinary differential equation, non-stiff case, µ = 1.]

solver   non-stiff    stiff
ode23         0.37   271.19
lsoda         0.26     0.23
adams         0.13   616.13
bdf           0.15     0.22
radau         0.53     0.72

Table 1: Comparison of solvers for a stiff and a non-stiff parametrisation of the van der Pol equation (time in seconds, mean values of ten simulations on an AMD AM2 X2 3000 CPU).

A comparison of timings for two explicit solvers, the Runge-Kutta method (ode23) and the adams method, with the implicit multistep solver (bdf, backward differentiation formula) shows a clear advantage for the latter in the stiff case (Table 1). The default solver (lsoda) is not necessarily the fastest, but shows robust behavior due to automatic stiffness detection. It uses the explicit multistep Adams method for the non-stiff case and the BDF method for the stiff case. The accuracy is comparable for all
solvers with atol = rtol = 10⁻⁶, the default.

A boundary value ODE

The webpage of Jeff Cash (Cash, 2009) contains many test cases, including their analytical solution (see below), that BVP solvers should be able to solve. We use equation no. 14 from this webpage as an example:

    ξ y'' − y = −(ξ π² + 1) cos(π x)

on the interval [−1, 1], and subject to the boundary conditions:

    y(x=−1) = 0
    y(x=+1) = 0

The second-order equation is first rewritten as two first-order equations:

    y1' = y2
    y2' = 1/ξ · (y1 − (ξ π² + 1) cos(π x))

It is implemented in R as:

Prob14 <- function(x, y, xi) {
  list(c(
    y[2],
    1/xi * (y[1] - (xi*pi*pi+1) * cos(pi*x))
  ))
}

With decreasing values of ξ, this problem becomes increasingly difficult to solve. We use three values of ξ, and solve the problem with the shooting, the MIRK and the collocation method (Ascher et al., 1995).

Note how the initial conditions yini and the conditions at the end of the integration interval yend are specified, where NA denotes that the value is not known. The independent variable is called x here (rather than times in ode).

library(bvpSolve)
x <- seq(-1, 1, by = 0.01)

shoot <- bvpshoot(yini = c(0, NA),
  yend = c(0, NA), x = x, parms = 0.01,
  func = Prob14)

twp <- bvptwp(yini = c(0, NA),
  yend = c(0, NA), x = x, parms = 0.0025,
  func = Prob14)

coll <- bvpcol(yini = c(0, NA),
  yend = c(0, NA), x = x, parms = 1e-04,
  func = Prob14)

The numerical approximation generated by bvptwp is very close to the analytical solution, e.g. for ξ = 0.0025:

xi <- 0.0025
analytic <- cos(pi * x) + exp((x - 1)/sqrt(xi)) +
  exp(-(x + 1)/sqrt(xi))
max(abs(analytic - twp[, 2]))

[1] 7.788209e-10

A similar low discrepancy (4 · 10⁻¹¹) is noted for ξ = 0.0001 as solved by bvpcol; the shooting method is considerably less precise (1.4 · 10⁻⁵), although the same tolerance (atol = 10⁻⁸) was used for all runs.

The plot shows how the shape of the solution is affected by the parameter ξ, becoming more and more steep near the boundaries, and therefore more and more difficult to solve, as ξ gets smaller.

plot(shoot[, 1], shoot[, 2], type = "l", lwd = 2,
  ylim = c(-1, 1), col = "blue",
  xlab = "x", ylab = "y", main = "BVP ODE")
lines(twp[, 1], twp[, 2], col = "red", lwd = 2)
lines(coll[, 1], coll[, 2], col = "green", lwd = 2)
legend("topright", legend = c("0.01", "0.0025",
  "0.0001"), col = c("blue", "red", "green"),
  title = expression(xi), lwd = 2)

[Figure 3: Solution of the BVP ODE problem, for different values of parameter ξ.]

Differential algebraic equations

The so-called "Rober problem" describes an autocatalytic reaction (Robertson, 1966) between three chemical species, y1, y2 and y3. The problem can be formulated either as an ODE (Mazzia and Magherini, 2008), or as a DAE:

    y1' = −0.04 y1 + 10⁴ y2 y3
    y2' = 0.04 y1 − 10⁴ y2 y3 − 3·10⁷ y2²
    1 = y1 + y2 + y3
where the first two equations are differential equations that specify the dynamics of chemical species y1 and y2, while the third algebraic equation ensures that the summed concentration of the three species remains 1.

The DAE has to be specified by the residual function instead of the rates of change (as in ODEs):

    r1 = −y1' − 0.04 y1 + 10⁴ y2 y3
    r2 = −y2' + 0.04 y1 − 10⁴ y2 y3 − 3·10⁷ y2²
    r3 = −1 + y1 + y2 + y3

Implemented in R this becomes:

daefun <- function(t, y, dy, parms) {
  res1 <- - dy[1] - 0.04 * y[1] +
    1e4 * y[2] * y[3]
  res2 <- - dy[2] + 0.04 * y[1] -
    1e4 * y[2] * y[3] - 3e7 * y[2]^2
  res3 <- y[1] + y[2] + y[3] - 1
  list(c(res1, res2, res3),
    error = as.vector(y[1] + y[2] + y[3]) - 1)
}

yini  <- c(y1 = 1, y2 = 0, y3 = 0)
dyini <- c(-0.04, 0.04, 0)
times <- 10 ^ seq(-6, 6, 0.1)

The input arguments of function daefun are the current time (t), the values of the state variables and their derivatives (y, dy) and the parameters (parms). It returns the residuals, concatenated, and an output variable, the error in the algebraic equation. The latter is added to check upon the accuracy of the results.

For DAEs solved with daspk, both the state variables and their derivatives need to be initialised (y and dy). Here we make sure that the initial conditions for y obey the algebraic constraint, while also the initial condition of the derivatives is consistent with the dynamics.

library(deSolve)
print(system.time(out <- daspk(y = yini,
  dy = dyini, times = times, res = daefun,
  parms = NULL)))

   user  system elapsed
   0.07    0.00    0.11

An S3 plot method can be used to plot all variables at once:

plot(out, ylab = "conc.", xlab = "time",
  type = "l", lwd = 2, log = "x")
mtext("IVP DAE", side = 3, outer = TRUE,
  line = -1)

There is a very fast initial change in concentrations, mainly due to the quick reaction between y1 and y2 and amongst y2. After that, the slow reaction of y1 with y2 causes the system to change much more smoothly. This is typical for stiff problems.

[Figure 4: Solution of the DAE problem for the substances y1, y2, y3; mass balance error: deviation of total sum from one.]

Partial differential equations

In partial differential equations (PDE), the function has several independent variables (e.g. time and depth) and contains their partial derivatives.

Many partial differential equations can be solved by numerical approximation (finite differencing) after rewriting them as a set of ODEs (see Schiesser, 1991; LeVeque, 2007; Hundsdorfer and Verwer, 2003).

Functions tran.1D, tran.2D, and tran.3D from R package ReacTran (Soetaert and Meysman, 2010) implement finite difference approximations of the diffusive-advective transport equation which, for the 1-D case, is:

    −(1/Ax) · ∂/∂x [ Ax · (−D · ∂C/∂x + u · C) ]

Here D is the "diffusion coefficient", u is the "advection rate", and Ax is some property (e.g. surface area) that depends on the independent variable, x.

It should be noted that the accuracy of the finite difference approximations can not be specified in the ReacTran functions. It is up to the user to make sure that the solutions are sufficiently accurate, e.g. by including more grid points.

One dimensional PDE

Diffusion-reaction models are a fundamental class of models which describe how concentration of matter, energy, information, etc. evolves in space and time under the influence of diffusive transport and transformation (Soetaert and Herman, 2009).
As an example, consider the 1-D diffusion-reaction model in [0, 10]:

    ∂C/∂t = ∂/∂x ( D · ∂C/∂x ) − Q

with C the concentration, t the time, x the distance from the origin, Q the consumption rate, and with boundary conditions (values at the model edges):

    ∂C/∂x |x=0 = 0
    C(x=10) = Cext

To solve this model in R, first the 1-D model Grid is defined; it divides 10 cm (L) into 1000 boxes (N).

library(ReacTran)
Grid <- setup.grid.1D(N = 1000, L = 10)

The model equation includes a transport term, approximated by ReacTran function tran.1D, and a consumption term (Q). The downstream boundary condition, prescribed as a concentration (C.down), needs to be specified; the zero-gradient at the upstream boundary is the default:

pde1D <- function(t, C, parms) {
  tran <- tran.1D(C = C, D = D,
    C.down = Cext, dx = Grid)$dC
  list(tran - Q)  # return value: rate of change
}

The model parameters are:

D <- 1      # diffusion constant
Q <- 1      # uptake rate
Cext <- 20

In a first application, the model is solved to steady-state, which retrieves the condition where the concentrations are invariant:

    0 = ∂/∂x ( D · ∂C/∂x ) − Q

In R, steady-state conditions can be estimated using functions from package rootSolve which implement, amongst others, a Newton-Raphson algorithm (Press et al., 2007). For 1-dimensional models, steady.1D is most efficient. The initial "guess" of the steady-state solution (y) is unimportant; here we take simply N random numbers. Argument nspec = 1 informs the solver that only one component is described.

Although a system of 1000 equations needs to be solved, this takes only a fraction of a second:

library(rootSolve)
print(system.time(
  std <- steady.1D(y = runif(Grid$N),
    func = pde1D, parms = NULL, nspec = 1)
))

   user  system elapsed
   0.02    0.00    0.02

The values of the state variables (y) are plotted against the distance, in the middle of the grid cells (Grid$x.mid).

plot(Grid$x.mid, std$y, type = "l",
  lwd = 2, main = "steady-state PDE",
  xlab = "x", ylab = "C", col = "red")

[Figure 5: Steady-state solution of the 1-D diffusion-reaction model.]

The analytical solution compares well with the numerical approximation:

analytical <- Q/2/D*(Grid$x.mid^2 - 10^2) + Cext
max(abs(analytical - std$y))

[1] 1.250003e-05

Next the model is run dynamically for 100 time units using deSolve function ode.1D, and starting with a uniform concentration:

require(deSolve)
times <- seq(0, 100, by = 1)
system.time(
  out <- ode.1D(y = rep(1, Grid$N),
    times = times, func = pde1D,
    parms = NULL, nspec = 1)
)

   user  system elapsed
   0.61    0.02    0.63

Here, out is a matrix, whose 1st column contains the output times, and the next columns the values of the state variables in the different boxes; we print the first columns of the last three rows of this matrix:

tail(out[, 1:4], n = 3)

       time         1         2         3
[99,]    98 -27.55783 -27.55773 -27.55754
[100,]   99 -27.61735 -27.61725 -27.61706
[101,]  100 -27.67542 -27.67532 -27.67513
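Because the steady-state problem is linear in C, it can also be written out explicitly in base R as a tridiagonal linear system and solved with solve(). The following sketch is our own construction (independent of rootSolve and ReacTran): it imposes the zero-gradient condition at x = 0 via a mirrored ghost node and the prescribed concentration Cext at x = 10:

```r
# Steady state of D * C'' - Q = 0 on [0, 10], C'(0) = 0, C(10) = Cext,
# discretised with central differences on N equally spaced nodes.
D <- 1; Q <- 1; Cext <- 20
N <- 100; h <- 10 / N
x <- seq(0, 10 - h, by = h)      # nodes with unknown C; C(10) = Cext is known

A <- matrix(0, N, N)
b <- rep(Q * h^2 / D, N)
A[1, 1] <- -2; A[1, 2] <- 2      # C'(0) = 0: ghost node C(-h) = C(h)
for (i in 2:(N - 1)) {
  A[i, i - 1] <- 1; A[i, i] <- -2; A[i, i + 1] <- 1
}
A[N, N - 1] <- 1; A[N, N] <- -2  # right-hand neighbour is the boundary value
b[N] <- b[N] - Cext

Cnum <- solve(A, b)
analytic <- Q/(2 * D) * (x^2 - 10^2) + Cext
```

Since the analytical solution is quadratic, the central difference scheme reproduces it to rounding error; in particular, Cnum[1] recovers C(0) = −30, the value the first box of the dynamic run above is still relaxing towards at t = 100.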
CONTRIBUTED RESEARCH ARTICLES

We plot the result using a blue-yellow-red color scheme, and using deSolve's S3 method image. Figure 6 shows that, as time proceeds, gradients develop from the uniform distribution, until the system almost reaches steady-state at the end of the simulation.

image(out, xlab = "time, days",
      ylab = "Distance, cm",
      main = "PDE", add.contour = TRUE)

[Figure 6 (image plot "PDE"; x-axis: time, days; y-axis: Distance, cm): Dynamic solution of the 1-D diffusion-reaction model.]

It should be noted that the steady-state model is effectively a boundary value problem, while the transient model is a prototype of a "parabolic" partial differential equation (LeVeque, 2007). Whereas R can also solve the other two main classes of PDEs, i.e. of the "hyperbolic" and "elliptic" type, it is well beyond the scope of this paper to elaborate on that.

Discussion

Although R is still predominantly applied for statistical analysis and graphical representation, it is more and more suitable for mathematical computing, e.g. in the field of matrix algebra (Bates and Maechler, 2008). Thanks to the differential equation solvers, R is also emerging as a powerful environment for dynamic simulations (Petzoldt, 2003; Soetaert and Herman, 2009; Stevens, 2009).

The new package deSolve has retained all the functionalities of its predecessor odesolve (Setzer, 2001), such as the potential to define models both in R code and in compiled languages. However, compared to odesolve, it includes a more complete set of integrators and a more extensive set of options to tune the integration routines; it provides more complete output, and has extended the applicability domain to also include DDEs, DAEs and PDEs.

Thanks to the DAE solvers daspk (Brenan et al., 1996) and radau (Hairer and Wanner, 2010) it is now also possible to model electronic circuits or equilibrium chemical systems. These problems are often of index ≤ 1. In many mechanical systems, physical constraints lead to DAEs of index up to 3, and these more complex problems can be solved with radau.

The inclusion of BVP and PDE solvers has opened up the application area to the field of reactive transport modelling (Soetaert and Meysman, 2010), such that R can now be used to describe quantities that change not only in time, but also along one or more spatial axes. We use it to model how ecosystems change along rivers, or in sediments, but it could equally serve to model the growth of a tumor in human brains, or the dispersion of toxicants in human tissues.

The open source matrix language R has great potential for dynamic modelling, and the tools currently available are suitable for solving a wide variety of practical and scientific problems. The performance is sufficient even for larger systems, especially when models can be formulated using matrix algebra or are implemented in compiled languages like C or Fortran (Soetaert et al., 2010b). Indeed, there is emerging interest in performing statistical analysis on differential equations, e.g. in package nlmeODE (Tornøe et al., 2004) for fitting non-linear mixed-effects models using differential equations, package FME (Soetaert and Petzoldt, 2010) for sensitivity analysis, parameter estimation and Markov chain Monte-Carlo analysis, or package ccems for combinatorially complex equilibrium model selection (Radivoyevitch, 2008).

However, there is ample room for extensions and improvements. For instance, the PDE solvers are quite memory intensive, and could benefit from the implementation of sparse matrix solvers that are more efficient in this respect (for instance, the "preconditioned Krylov" part of the daspk method is not yet supported). In addition, the methods implemented in ReacTran handle equations defined on very simple shapes only. Extending the PDE approach to finite elements (Strang and Fix, 1973) would open up the application domain of R to any irregular geometry. Other spatial discretisation schemes could be added, e.g. for use in fluid dynamics.

Our models are often applied to derive unknown parameters by fitting them against data; this relies on the availability of apt parameter fitting algorithms. Discussion of these items is highly welcomed in the new special interest group about dynamic models in R (https://stat.ethz.ch/mailman/listinfo/r-sig-dynamic-models).

Bibliography

U. Ascher, R. Mattheij, and R. Russell. Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. Philadelphia, PA, 1995.

U. M. Ascher and L. R. Petzold. Computer Methods for Ordinary Differential Equations and Differential-Algebraic Equations. SIAM, Philadelphia, 1998.

G. Bader and U. Ascher. A new basis implementation for a mixed order boundary value ODE solver. SIAM J. Scient. Stat. Comput., 8:483–500, 1987.

D. Bates and M. Maechler. Matrix: A Matrix Package for R, 2008. R package version 0.999375-9.

P. Bogacki and L. Shampine. A 3(2) pair of Runge-Kutta formulas. Appl. Math. Lett., 2:1–9, 1989.

K. E. Brenan, S. L. Campbell, and L. R. Petzold. Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations. SIAM Classics in Applied Mathematics, 1996.

P. N. Brown, G. D. Byrne, and A. C. Hindmarsh. VODE, a variable-coefficient ODE solver. SIAM J. Sci. Stat. Comput., 10:1038–1051, 1989.

J. C. Butcher. The Numerical Analysis of Ordinary Differential Equations, Runge-Kutta and General Linear Methods. Wiley, Chichester, New York, 1987.

J. R. Cash. 35 Test Problems for Two Way Point Boundary Value Problems, 2009. URL http://www.ma.ic.ac.uk/~jcash/BVP_software/PROBLEMS.PDF.

J. R. Cash and A. H. Karp. A variable order Runge-Kutta method for initial value problems with rapidly varying right-hand sides. ACM Transactions on Mathematical Software, 16:201–222, 1990.

J. R. Cash and F. Mazzia. A new mesh selection algorithm, based on conditioning, for two-point boundary value codes. J. Comput. Appl. Math., 184:362–381, 2005.

A. Couture-Beil, J. T. Schnute, and R. Haigh. PBSddesolve: Solver for Delay Differential Equations, 2010. R package version 1.08.11.

J. R. Dormand and P. J. Prince. A family of embedded Runge-Kutta formulae. J. Comput. Appl. Math., 6:19–26, 1980.

E. Hairer and G. Wanner. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Second Revised Edition. Springer-Verlag, Heidelberg, 2010.

E. Hairer, S. P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Second Revised Edition. Springer-Verlag, Heidelberg, 2009.

S. Hamdi, W. E. Schiesser, and G. W. Griffiths. Method of lines. Scholarpedia, 2(7):2859, 2007.

A. C. Hindmarsh. ODEPACK, a systematized collection of ODE solvers. In R. Stepleman, editor, Scientific Computing, Vol. 1 of IMACS Transactions on Scientific Computation, pages 55–64. IMACS / North-Holland, Amsterdam, 1983.

W. Hundsdorfer and J. Verwer. Numerical Solution of Time-Dependent Advection-Diffusion-Reaction Equations. Springer Series in Computational Mathematics. Springer-Verlag, Berlin, 2003.

S. M. Iacus. sde: Simulation and Inference for Stochastic Differential Equations, 2008. R package version 2.0.3.

IEEE Standard 754. IEEE standard for floating-point arithmetic, Aug 2008.

A. A. King, E. L. Ionides, and C. M. Breto. pomp: Statistical Inference for Partially Observed Markov Processes, 2008. R package version 0.21-3.

R. J. LeVeque. Finite Difference Methods for Ordinary and Partial Differential Equations, Steady State and Time Dependent Problems. SIAM, 2007.

F. Mazzia and C. Magherini. Test Set for Initial Value Problem Solvers, release 2.4. Department of Mathematics, University of Bari, Italy, 2008. URL http://pitagora.dm.uniba.it/~testset. Report 4/2008.

L. R. Petzold. Automatic selection of methods for solving stiff and nonstiff systems of ordinary differential equations. SIAM J. Sci. Stat. Comput., 4:136–148, 1983.

T. Petzoldt. R as a simulation platform in ecological modelling. R News, 3(3):8–16, 2003.

W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, 3rd edition, 2007.

P. J. Prince and J. R. Dormand. High order embedded Runge-Kutta formulae. J. Comput. Appl. Math., 7:67–75, 1981.

R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2009. URL http://www.R-project.org. ISBN 3-900051-07-0.

T. Radivoyevitch. Equilibrium model selection: dTTP induced R1 dimerization. BMC Systems Biology, 2:15, 2008.
H. H. Robertson. The solution of a set of reaction rate equations. In J. Walsh, editor, Numerical Analysis: An Introduction, pages 178–182. Academic Press, London, 1966.

W. E. Schiesser. The Numerical Method of Lines: Integration of Partial Differential Equations. Academic Press, San Diego, 1991.

R. W. Setzer. The odesolve Package: Solvers for Ordinary Differential Equations, 2001. R package version 0.1-1.

K. Soetaert. rootSolve: Nonlinear Root Finding, Equilibrium and Steady-State Analysis of Ordinary Differential Equations, 2009. R package version 1.6.

K. Soetaert and P. M. J. Herman. A Practical Guide to Ecological Modelling. Using R as a Simulation Platform. Springer, 2009. ISBN 978-1-4020-8623-6.

K. Soetaert and F. Meysman. ReacTran: Reactive Transport Modelling in 1D, 2D and 3D, 2010. R package version 1.2.

K. Soetaert and T. Petzoldt. Inverse modelling, sensitivity and Monte Carlo analysis in R using package FME. Journal of Statistical Software, 33(3):1–28, 2010. URL http://www.jstatsoft.org/v33/i03/.

K. Soetaert, J. R. Cash, and F. Mazzia. bvpSolve: Solvers for Boundary Value Problems of Ordinary Differential Equations, 2010a. R package version 1.2.

K. Soetaert, T. Petzoldt, and R. W. Setzer. Solving differential equations in R: Package deSolve. Journal of Statistical Software, 33(9):1–25, 2010b. ISSN 1548-7660. URL http://www.jstatsoft.org/v33/i09.

K. Soetaert, T. Petzoldt, and R. W. Setzer. R Package deSolve: Writing Code in Compiled Languages, 2010c. deSolve vignette, R package version 1.8.

K. Soetaert, T. Petzoldt, and R. W. Setzer. R Package deSolve: Solving Initial Value Differential Equations, 2010d. deSolve vignette, R package version 1.8.

M. H. H. Stevens. A Primer of Ecology with R. Use R Series. Springer, 2009. ISBN 978-0-387-89881-0.

G. Strang and G. Fix. An Analysis of The Finite Element Method. Prentice Hall, 1973.

C. W. Tornøe, H. Agersø, E. N. Jonsson, H. Madsen, and H. A. Nielsen. Non-linear mixed-effects pharmacokinetic/pharmacodynamic modelling in nlme using differential equations. Computer Methods and Programs in Biomedicine, 76:31–40, 2004.

B. van der Pol and J. van der Mark. Frequency demultiplication. Nature, 120:363–364, 1927.

Karline Soetaert
Netherlands Institute of Ecology
K.Soetaert@nioo.knaw.nl

Thomas Petzoldt
Technische Universität Dresden
Thomas.Petzoldt@tu-dresden.de

R. Woodrow Setzer
US Environmental Protection Agency
Setzer.Woodrow@epamail.epa.gov
Table 2: Summary of the main functions that solve differential equations.

Function    Package      Description
ode         deSolve      IVP of ODEs, full, banded or arbitrary sparse Jacobian
ode.1D      deSolve      IVP of ODEs resulting from 1-D reaction-transport problems
ode.2D      deSolve      IVP of ODEs resulting from 2-D reaction-transport problems
ode.3D      deSolve      IVP of ODEs resulting from 3-D reaction-transport problems
daspk       deSolve      IVP of DAEs of index ≤ 1, full or banded Jacobian
radau       deSolve      IVP of DAEs of index ≤ 3, full or banded Jacobian
dde         PBSddesolve  IVP of delay differential equations, based on Runge-Kutta formulae
dede        deSolve      IVP of delay differential equations, based on Adams and BDF formulae
bvpshoot    bvpSolve     BVP of ODEs; the shooting method
bvptwp      bvpSolve     BVP of ODEs; mono-implicit Runge-Kutta formula
bvpcol      bvpSolve     BVP of ODEs; collocation formula
steady      rootSolve    steady-state of ODEs; full, banded or arbitrary sparse Jacobian
steady.1D   rootSolve    steady-state of ODEs resulting from 1-D reaction-transport problems
steady.2D   rootSolve    steady-state of ODEs resulting from 2-D reaction-transport problems
steady.3D   rootSolve    steady-state of ODEs resulting from 3-D reaction-transport problems
tran.1D     ReacTran     numerical approximation of 1-D advective-diffusive transport problems
tran.2D     ReacTran     numerical approximation of 2-D advective-diffusive transport problems
tran.3D     ReacTran     numerical approximation of 3-D advective-diffusive transport problems

Table 3: Summary of the auxiliary functions that solve differential equations.

Function        Package    Description
lsoda           deSolve    IVP ODEs, full or banded Jacobian, automatic choice for stiff or non-stiff method
lsodar          deSolve    same as lsoda, but includes a root-solving procedure
lsode, vode     deSolve    IVP ODEs, full or banded Jacobian, user specifies if stiff or non-stiff
lsodes          deSolve    IVP ODEs, arbitrary sparse Jacobian, stiff method
rk4, rk, euler  deSolve    IVP ODEs, using Runge-Kutta and Euler methods
zvode           deSolve    IVP ODEs, same as vode, but for complex variables
runsteady       rootSolve  steady-state ODEs by dynamically running, full or banded Jacobian
stode           rootSolve  steady-state ODEs by Newton-Raphson method, full or banded Jacobian
stodes          rootSolve  steady-state ODEs by Newton-Raphson method, arbitrary sparse Jacobian
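The fixed-step integrators listed in Table 3 (euler, rk4, rk) implement classical explicit schemes. As a purely illustrative base-R sketch (not deSolve's implementation), one classical fourth-order Runge-Kutta step and its use on a test problem look like this:

```r
# One classical 4th-order Runge-Kutta step (illustrative sketch;
# deSolve's rk4() provides the real interface).
rk4_step <- function(f, t, y, h) {
  k1 <- f(t,       y)
  k2 <- f(t + h/2, y + h/2 * k1)
  k3 <- f(t + h/2, y + h/2 * k2)
  k4 <- f(t + h,   y + h   * k3)
  y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
}

# Integrate dy/dt = -y, y(0) = 1, over [0, 1] in 100 steps
f <- function(t, y) -y
h <- 0.01
y <- 1
for (t in seq(0, 1 - h, by = h)) y <- rk4_step(f, t, y, h)
abs(y - exp(-1))   # global error is O(h^4): very small
```

For stiff problems a fixed-step explicit scheme like this is a poor choice; the variable-step, variable-order solvers in Table 3 (lsoda, vode, etc.) are then preferable.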
Source References

by Duncan Murdoch

Abstract: Since version 2.10.0, R includes expanded support for source references in R code and '.Rd' files. This paper describes the origin and purposes of source references, and current and future support for them.

One of the strengths of R is that it allows "computation on the language", i.e. the parser returns an object which can be manipulated, not just evaluated. This has applications in quality control checks, debugging, and elsewhere. For example, the codetools package (Tierney, 2009) examines the structure of parsed source code to look for common programming errors. Functions marked by debug() can be executed one statement at a time, and the trace() function can insert debugging statements into any function.

Computing on the language is often enhanced by being able to refer to the original source code, rather than just to a deparsed (reconstructed) version of it based on the parsed object. To support this, we added source references to R 2.5.0 in 2007. These are attributes attached to the result of parse() or (as of 2.10.0) parse_Rd() to indicate where a particular part of an object originated. In this article I will describe their structure and how they are used in R. The article is aimed at developers who want to create debuggers or other tools that use the source references, at users who are curious about R internals, and also at users who want to use the existing debugging facilities. The latter group may wish to skip over the gory details and go directly to the section "Using Source References".

The R parsers

We start with a quick introduction to the R parser. The parse() function returns an R object of type "expression". This is a list of statements; the statements can be of various types. For example, consider the R source shown in Figure 1.

1: x <- 1:10 # Initialize x
2: for (i in x) {
3:   print(i) # Print each entry
4: }
5: x

Figure 1: The contents of 'sample.R'.

If we parse this file, we obtain an expression of length 3:

> parsed <- parse("sample.R")
> length(parsed)
[1] 3
> typeof(parsed)
[1] "expression"

The first element is the assignment, the second element is the for loop, and the third is the single x at the end:

> parsed[[1]]
x <- 1:10
> parsed[[2]]
for (i in x) {
    print(i)
}
> parsed[[3]]
x

The first two elements are both of type "language", and are made up of smaller components. The difference between an "expression" and a "language" object is mainly internal: the former is based on the generic vector type (i.e. type "list"), whereas the latter is based on the "pairlist" type. Pairlists are rarely encountered explicitly in any other context. From a user point of view, they act just like generic vectors.

The third element x is of type "symbol". There are other possible types, such as "NULL", "double", etc.: essentially any simple R object could be an element. The comments in the source code and the white space making up the indentation of the third line are not part of the parsed object.

The parse_Rd() function parses '.Rd' documentation files. It also returns a recursive structure containing objects of different types (Murdoch and Urbanek, 2009; Murdoch, 2010).

Source reference structure

As described above, the result of parse() is essentially a list (the "expression" object) of objects that may be lists (the "language" objects) themselves, and so on recursively. Each element of this structure from the top down corresponds to some part of the source file used to create it: in our example, parsed[[1]] corresponds to the first line of 'sample.R', parsed[[2]] is the second through fourth lines, and parsed[[3]] is the fifth line.

The comments and indentation, though helpful to the human reader, are not part of the parsed object. However, by default the parsed object does contain a "srcref" attribute:

> attr(parsed, "srcref")
[[1]]
x <- 1:10

[[2]]
for (i in x) {
    print(i) # Print each entry
}

[[3]]
x

Although it appears that the "srcref" attribute contains the source, in fact it only references it, and the print.srcref() method retrieves it for printing. If we remove the class from each element, we see the true structure:

> lapply(attr(parsed, "srcref"), unclass)

[[1]]
[1] 1 1 1 9 1 9
attr(,"srcfile")
sample.R

[[2]]
[1] 2 1 4 1 1 1
attr(,"srcfile")
sample.R

[[3]]
[1] 5 1 5 1 1 1
attr(,"srcfile")
sample.R

Each element is a vector of 6 integers: (first line, first byte, last line, last byte, first character, last character). The values refer to the position of the source for each element in the original source file; the details of the source file are contained in a "srcfile" attribute on each reference.

The reason both bytes and characters are recorded in the source reference is historical. When they were introduced, they were mainly used for retrieving source code for display; for this, bytes are needed. Since R 2.9.0, they have also been used to aid in error messages. Since some characters take up more than one byte, users need to be informed about character positions, not byte positions, and the last two entries were added.

The "srcfile" attribute is also not as simple as it looks. For example,

> srcref <- attr(parsed, "srcref")[[1]]
> srcfile <- attr(srcref, "srcfile")
> typeof(srcfile)
[1] "environment"
> ls(srcfile)
[1] "Enc"       "encoding"  "filename"
[4] "timestamp" "wd"

The "srcfile" attribute is actually an environment containing an encoding, a filename, a timestamp, and a working directory. These give information about the file from which the parser was reading. The reason it is an environment is that environments are reference objects: even though all three source references contain this attribute, in actuality there is only one copy stored. This was done to save memory, since there are often hundreds of source references from each file.

Source references in objects returned by parse_Rd() use the same structure as those returned by parse(). The main difference is that in Rd objects source references are attached to every component, whereas parse() only constructs source references for complete statements, not for their component parts, and they are attached to the container of the statements. Thus for example a braced list of statements processed by parse() will receive a "srcref" attribute containing source references for each statement within, while the statements themselves will not hold their own source references, and sub-expressions within each statement will not generate source references at all. In contrast, the "srcref" attribute for a section in an '.Rd' file will be a source reference for the whole section, and each component part in the section will have its own source reference.

Relation to the "source" attribute

By default the R parser also creates an attribute named "source" when it parses a function definition. When available, this attribute is used by default in lieu of deparsing to display the function definition. It is unrelated to the "srcref" attribute, which is intended to point to the source, rather than to duplicate the source. An integrated development environment (IDE) would need to know the correspondence between R code in R and the true source, and "srcref" attributes are intended to provide this.

When are "srcref" attributes added?

As mentioned above, the parser adds a "srcref" attribute by default. For this, it is assumed that options("keep.source") is left at its default setting of TRUE, and that parse() is given a filename as argument file, or a character vector as argument text. In the latter case, there is no source file to reference, so parse() copies the lines of source into a "srcfilecopy" object, which is simply a "srcfile" object that contains a copy of the text.

Developers may wish to add source references in other situations. To do that, an object inheriting from class "srcfile" should be passed as the srcfile argument to parse().

The other situation in which source references are likely to be created in R code is when calling
source(). The source() function calls parse(), creating the source references, and then evaluates the resulting code. At this point newly created functions will have source references attached to the body of the function.

The section "Breakpoints" below discusses how to make sure that source references are created in package code.

Using source references

Error locations

For the most part, users need not be concerned with source references, but they interact with them frequently. For example, error messages make use of them to report on the location of syntax errors:

> source("error.R")
Error in source("error.R") : error.R:4:1: unexpected 'else'
3: print( "less" )
4: else
   ^

A more recent addition is the use of source references in code being executed. When R evaluates a function, it evaluates each statement in turn, keeping track of any associated source references. As of R 2.10.0, these are reported by the debugging support functions traceback(), browser(), recover(), and dump.frames(), and are returned as an attribute on each element returned by sys.calls(). For example, consider the function shown in Figure 2.

1: # Compute the absolute value
2: badabs <- function(x) {
3:   if (x < 0)
4:     x <- -x
5:   x
6: }

Figure 2: The contents of 'badabs.R'.

This function is syntactically correct, and works to calculate the absolute value of scalar values, but is not a valid way to calculate the absolute values of the elements of a vector, and when called it will generate an incorrect result and a warning:

> source("badabs.R")
> badabs( c(5, -10) )
[1]   5 -10
Warning message:
In if (x < 0) x <- -x :
  the condition has length > 1 and only the first element will be used

In this simple example it is easy to see where the problem occurred, but in a more complex function it might not be so simple. To find it, we can convert the warning to an error using

> options(warn = 2)

and then re-run the code to generate an error. After generating the error, we can display a stack trace:

> traceback()
5: doWithOneRestart(return(expr), restart)
4: withOneRestart(expr, restarts[[1L]])
3: withRestarts({
       .Internal(.signalCondition(simpleWarning(msg, call), msg, call))
       .Internal(.dfltWarn(msg, call))
   }, muffleWarning = function() NULL) at badabs.R#2
2: .signalSimpleWarning("the condition has length > 1 and only the first element will be used",
       quote(if (x < 0) x <- -x)) at badabs.R#3
1: badabs(c(5, -10))

To read a traceback, start at the bottom. We see our call from the console as line "1:", and the warning being signalled in line "2:". At the end of line "2:" it says that the warning originated "at badabs.R#3", i.e. line 3 of the 'badabs.R' file.

Breakpoints

Users may also make use of source references when setting breakpoints. The trace() function lets us set breakpoints in particular R functions, but we need to specify which function and where to do the setting. The setBreakpoint() function is a more friendly front end that uses source references to construct a call to trace(). For example, if we wanted to set a breakpoint on 'badabs.R' line 3, we could use

> setBreakpoint("badabs.R#3")
D:\svn\papers\srcrefs\badabs.R#3:
 badabs step 2 in <environment: R_GlobalEnv>

This tells us that we have set a breakpoint in step 2 of the function badabs found in the global environment. When we run it, we will see

> badabs( c(5, -10) )
badabs.R#3
Called from: badabs(c(5, -10))
Browse[1]>

telling us that we have broken into the browser at the requested line, and it is waiting for input. We could then examine x, single step through the code, or do any other action of which the browser is capable.

By default, most packages are built without source reference information, because it adds quite substantially to the size of the code. However, setting
the environment variable R_KEEP_PKG_SOURCE=yes before installing a source package will tell R to keep the source references, and then breakpoints may be set in package source code. The envir argument to setBreakpoint() will need to be set in order to tell it to search outside the global environment when setting breakpoints.

The #line directive

In some cases, R source code is written by a program, not by a human being. For example, Sweave() extracts lines of code from Sweave documents before sending the lines to R for parsing and evaluation. To support such preprocessors, the R 2.10.0 parser recognizes a new directive of the form

#line nn "filename"

where nn is an integer. As with the same-named directive in the C language, this tells the parser to assume that the next line of source is line nn from the given filename for the purpose of constructing source references.

The Sweave() function doesn't currently make use of this, but in the future, it (and other preprocessors) could output #line directives so that source references and syntax errors refer to the original source location rather than to an intermediate file. The #line directive was a late addition to R 2.10.0. Support for this in Sweave() appeared in R 2.12.0.

The future

The source reference structure could be improved. First, it adds quite a lot of bulk to R objects in memory. Each source reference is an integer vector of length 6 with a class and "srcfile" attribute. It is hard to measure exactly how much space this takes because much is shared with other source references, but it is on the order of 100 bytes per reference. Clearly a more efficient design is possible, at the expense of moving support code to C from R. As part of this move, the use of environments for the "srcfile" attribute could be dropped: they were used as the only available R-level reference objects. For developers, this means that direct access to particular parts of a source reference should be localized as much as possible: they should write functions to extract particular information, and use those functions where needed, rather than extracting information directly. Then, if the implementation changes, only those extractor functions will need to be updated.

Finally, source level debugging could be implemented to make use of source references, to single step through the actual source files, rather than displaying a line at a time as the browser() does.

Bibliography

D. Murdoch. Parsing Rd files. 2010. URL http://developer.r-project.org/parseRd.pdf.

D. Murdoch and S. Urbanek. The new R help system. The R Journal, 1/2:60–65, 2009.

L. Tierney. codetools: Code Analysis Tools for R, 2009. R package version 0.2-2.

Duncan Murdoch
Dept. of Statistical and Actuarial Sciences
University of Western Ontario
London, Ontario, Canada
murdoch@stats.uwo.ca
hglm: A Package for Fitting Hierarchical Generalized Linear Models

by Lars Rönnegård, Xia Shen and Moudud Alam

Abstract: We present the hglm package for fitting hierarchical generalized linear models. It can be used for linear mixed models and generalized linear mixed models with random effects for a variety of links and a variety of distributions for both the outcomes and the random effects. Fixed effects can also be fitted in the dispersion part of the model.

Introduction

The hglm package (Alam et al., 2010) implements the estimation algorithm for hierarchical generalized linear models (HGLM; Lee and Nelder, 1996). The package fits generalized linear models (GLM; McCullagh and Nelder, 1989) with random effects, where the random effect may come from a distribution conjugate to one of the exponential-family distributions (normal, gamma, beta or inverse-gamma). The user may explicitly specify the design matrices both for the fixed and random effects. In consequence, correlated random effects, as well as random regression models, can be fitted. The dispersion parameter can also be modeled with fixed effects.

The main function is hglm() and the input is specified in a similar manner as for glm(). For instance,

R> hglm(fixed = y ~ week, random = ~ 1|ID,
        family = binomial(link = logit))

fits a logit model for y with week as fixed effect and ID representing the clusters for a normally distributed random intercept. Given an hglm object, the standard generic functions are print(), summary() and plot().

Generalized linear mixed models (GLMM) have previously been implemented in several R functions, such as the lmer() function in the lme4 package (Bates and Maechler, 2010) and the glmmPQL() function in the MASS package (Venables and Ripley, 2002). In GLMM, the random effects are assumed to be Gaussian, whereas the hglm() function allows other distributions to be specified for the random effect. The hglm() function also extends the fitting algorithm of the dglm package (Dunn and Smyth, 2009) by including random effects in the linear predictor for the mean, i.e. it extends the algorithm so that it can cope with mixed models. Moreover, the model specification in hglm() can be given as a formula or alternatively in terms of y, X, Z and X.disp. Here y is the vector of observed responses, X and Z are the design matrices for the fixed and random effects, respectively, in the linear predictor for the means, and X.disp is the design matrix for the fixed effects in the dispersion parameter. This enables a more flexible modeling of the random effects than specifying the model by an R formula. Consequently, this option is not as user friendly but gives the user the possibility to fit random regression models and random effects with known correlation structure.

The hglm package produces estimates of fixed effects, random effects and variance components as well as their standard errors. In the output it also produces diagnostics such as deviance components and leverages.

Three illustrating models

The hglm package makes it possible to

1. include fixed effects in a model for the residual variance,
2. fit models where the random effect distribution is not necessarily Gaussian,
3. estimate variance components when we have correlated random effects.

Below we describe three models that can be fitted using hglm(), which illustrate these three points. Later, in the Examples section, five examples are presented that include the R syntax and output for the hglm() function.

Linear mixed model with fixed effects in the residual variance

We start by considering a normal-normal model with heteroscedastic residual variance. In biology, for instance, this is important if we wish to model a random genetic effect (e.g., Rönnegård and Carlborg, 2007) for a trait y, where the residual variance differs between the sexes.

For the response y and observation number i we have:

$y_i \mid \beta, u, \beta_d \sim N(X_i\beta + Z_i u,\ \exp(X_{d,i}\beta_d))$

$u \sim \mathrm{MVN}(0,\ I\sigma_u^2)$

where $\beta$ are the fixed effects in the mean part of the model, the random effect $u$ represents random variation among clusters of observations and $\beta_d$ is the fixed effect in the residual variance part of the model. The variance of the random effect $u$ is given by $\sigma_u^2$.
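The heteroscedastic model above can be illustrated with a small base-R simulation that is independent of the hglm package; all names, sizes and parameter values below are invented for the sketch, with sex playing the role of the single column of $X_d$.

```r
# Simulate from the heteroscedastic normal-normal model of this
# section: a random cluster effect u plus a residual whose
# log-variance depends on sex (illustrative values only).
set.seed(1)
n_clusters <- 10; n_per <- 100
id     <- rep(seq_len(n_clusters), each = n_per)
sex    <- rep(c(0, 1), length.out = n_clusters * n_per)  # X_d column
beta_d <- c(0.5, 1.5)   # log residual variance for sex 0 / sex 1
u      <- rnorm(n_clusters, sd = 2)                      # sigma_u = 2
e      <- rnorm(length(id), sd = sqrt(exp(beta_d[sex + 1])))
y      <- 1 + u[id] + e  # intercept-only mean model

v <- tapply(e, sex, var)  # empirical residual variances by sex
v                         # roughly exp(0.5) = 1.65 and exp(1.5) = 4.48
```

Fitting such data requires a dispersion model alongside the mean model, which is exactly what the X.disp argument of hglm() provides; neither dglm (no random effects in the mean) nor lmer() (no model for the residual variance) covers this case, as noted below.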
The subscript i for the matrices X, Z, and X_d indicates the i'th row. Here, a log link function is used for the residual variance and the model for the residual variance is therefore given by exp(X_d,i β_d). In the more general GLM notation, the residual variance here is described by the dispersion term φ, so we have log(φ_i) = X_d,i β_d.

This model cannot be fitted with the dglm package, for instance, because we have random effects in the mean part of the model. It is also beyond the scope of the lmer() function since we allow a model for the residual variance. The implementation in hglm() for this model is demonstrated in Example 2 in the Examples section below.

A Poisson model with gamma distributed random effects

For dependent count data it is common to model a Poisson distributed response with a gamma distributed random effect (Lee et al., 2006). If we assume no overdispersion conditional on u and thereby have a fixed dispersion term, this model may be specified as:

    E(y_i | β, u) = exp(X_i β + Z_i v)

where a level j in the random effect v is given by v_j = log(u_j) and the u_j are iid with gamma distribution having mean and variance: E(u_j) = 1, var(u_j) = λ.

This model can also be fitted with the hglm package, since it extends existing GLMM functions (e.g. lmer()) to allow a non-normal distribution for the random effect. Later on, in Example 3, we show the hglm() code used for fitting a gamma-Poisson model with fixed effects included in the dispersion parameter.

A linear mixed model with a correlated random effect

In animal breeding it is important to estimate variance components prior to ranking of animal performances (Lynch and Walsh, 1998). In such models the genetic effect of each animal is modeled as a level in a random effect and the correlation structure A is a matrix with known elements calculated from the pedigree information. The model is given by

    y_i | β, u ~ N(X_i β + Z_i u, σ²_e)
    u ~ MVN(0, A σ²_u)

This may be reformulated as (see Lee et al., 2006; Rönnegård and Carlborg, 2007)

    y_i | β, u ~ N(X_i β + Z*_i u*, σ²_e)
    u* ~ MVN(0, I σ²_u)

where Z* = ZL and L is the Cholesky factorization of A. Thus the model can be fitted using the hglm() function with a user-specified input matrix Z (see R code in Example 4 below).

Overview of the fitting algorithm

The fitting algorithm is described in detail in Lee et al. (2006) and is summarized as follows. Let n be the number of observations and k be the number of levels in the random effect. The algorithm is then:

1. Initialize starting values.
2. Construct an augmented model with response y_aug = (y, E(u)).
3. Use a GLM to estimate β and v given the vector φ and the dispersion parameter for the random effect λ. Save the deviance components and leverages from the fitted model.
4. Use a gamma GLM to estimate β_d from the first n deviance components d and leverages h obtained from the previous model. The response variable and weights for this model are d/(1 − h) and (1 − h)/2, respectively. Update the dispersion parameter by putting φ equal to the predicted response values for this model.
5. Use a similar GLM as in Step 4 to estimate λ from the last k deviance components and leverages obtained from the GLM in Step 3.
6. Iterate between steps 3-5 until convergence.

For a more detailed description of the algorithm in a particular context, see below.

H-likelihood theory

Let y be the response and u an unobserved random effect. The hglm package fits a hierarchical model y | u ~ f_m(µ, φ) and u ~ f_d(ψ, λ), where f_m and f_d are specified distributions for the mean and dispersion parts of the model. We follow the notation of Lee and Nelder (1996), which is based on the GLM terminology by McCullagh and Nelder (1989). We also follow the likelihood approach where the model is described in terms of likelihoods.
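The reformulation with Z* = ZL used for correlated random effects can be checked numerically in a few lines of base R. The 3 × 3 correlation matrix below is a made-up toy example, not from the paper: since A = LL', iid effects u* entering through Z* = ZL have covariance Z* Z*' σ²_u = Z A Z' σ²_u, matching the correlated effects u.

```r
A <- matrix(c(1.00, 0.50, 0.25,
              0.50, 1.00, 0.50,
              0.25, 0.50, 1.00), 3, 3)   # known correlation structure
Z <- diag(3)                             # one observation per random-effect level
L <- t(chol(A))                          # lower-triangular Cholesky factor, A = L %*% t(L)
Zstar <- Z %*% L
all.equal(Zstar %*% t(Zstar), Z %*% A %*% t(Z))  # TRUE: same covariance structure
```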
The conditional (log-)likelihood for y given u has the form of a GLM

    ℓ(θ, φ; y | u) = (yθ − b(θ))/a(φ) + c(y, φ)    (1)

where θ is the canonical parameter, φ is the dispersion term, and µ is the conditional mean of y given u
with η = g(µ), i.e. g() is a link function for the GLM. The linear predictor is given by η = Xβ + v, where v = v(u) for some strict monotonic function of u. The link function v(u) should be specified so that the random effects occur linearly in the linear predictor to ensure meaningful inference from the h-likelihood (Lee et al., 2007). The h-likelihood or hierarchical likelihood is defined by

    h = ℓ(θ, φ; y | u) + ℓ(α; v)    (2)

where ℓ(α; v) is the log density for v with parameter α. The estimates of β and v are given by ∂h/∂β = 0 and ∂h/∂v = 0. The dispersion components are estimated by maximizing the adjusted profile h-likelihood

    h_p = ( h − (1/2) log | −(1/2π) H | ) evaluated at β = β̂, v = v̂    (3)

where H is the Hessian matrix of the h-likelihood. The dispersion term φ can be connected to a linear predictor X_d β_d given a link function g_d(·) with g_d(φ) = X_d β_d. The adjusted profile likelihoods of ℓ and h may be used for inference of β, v and the dispersion parameters φ and λ (pp. 186 in Lee et al., 2006). More detail and discussion of h-likelihood theory is presented in the hglm vignette.

Detailed description of the hglm fitting algorithm for a linear mixed model with heteroscedastic residual variance

In this section we describe the fitting algorithm in detail for a linear mixed model where fixed effects are included in the model for the residual variance. The extension to distributions other than Gaussian is described at the end of the section.

Lee and Nelder (1996) showed that linear mixed models can be fitted using a hierarchy of GLM by using an augmented linear model. The linear mixed model

    y = Xb + Zu + e
    V(y) = Z Z^t σ²_u + R σ²_e

where R is a diagonal matrix with elements given by the estimated dispersion model (i.e. φ defined below). In the first iteration of the HGLM algorithm, R is an identity matrix. The model may be written as an augmented weighted linear model:

    y_a = T_a δ + e_a    (4)

where

    y_a = [ y   ]    T_a = [ X  Z   ]    δ = [ b ]    e_a = [  e ]
          [ 0_q ]          [ 0  I_q ]        [ u ]          [ −u ]

Here, q is the number of columns in Z, 0_q is a vector of zeros of length q, and I_q is the identity matrix of size q × q. The variance-covariance matrix of the augmented residual vector is given by

    V(e_a) = [ R σ²_e   0        ]
             [ 0        I_q σ²_u ]

Given σ²_e and σ²_u, this weighted linear model gives the same estimates of the fixed and random effects (b and u respectively) as Henderson's mixed model equations (Henderson, 1976).

The estimates from weighted least squares are given by:

    T_a^t W^(−1) T_a δ̂ = T_a^t W^(−1) y_a

where W ≡ V(e_a). The two variance components are estimated iteratively by applying a gamma GLM to the residuals e²_i and u²_i with intercept terms included in the linear predictors. The leverages h_i for these models are calculated from the diagonal elements of the hat matrix:

    H_a = T_a (T_a^t W^(−1) T_a)^(−1) T_a^t W^(−1)    (5)

A gamma GLM is used to fit the dispersion part of the model with response

    y_d,i = e²_i / (1 − h_i)    (6)

where E(y_d) = µ_d and µ_d ≡ φ (i.e. σ²_e for a Gaussian response). The GLM model for the dispersion parameter is then specified by the link function g_d(.) and the linear predictor X_d β_d, with prior weights (1 − h_i)/2, for

    g_d(µ_d) = X_d β_d    (7)

Similarly, a gamma GLM is fitted to the dispersion term α (i.e. σ²_u for a GLMM) for the random effect v, with

    y_α,j = u²_j / (1 − h_(n+j)),  j = 1, 2, ..., q    (8)

and

    g_α(µ_α) = λ    (9)

where the prior weights are (1 − h_(n+j))/2 and the estimated dispersion term for the random effect is given by α̂ = g_α^(−1)(λ̂).

The algorithm iterates by updating both R = diag(φ̂) and σ̂²_u = α̂, and subsequently going back to Eq. (4). For a non-Gaussian response variable y, the estimates are obtained simply by fitting a GLM instead of Eq. (4) and by replacing e²_i and u²_j with the deviance components from the augmented model (see Lee et al., 2006).
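The interconnected GLMs of this algorithm can be sketched for the simplest case using base R alone. This is an illustrative toy implementation under simplifying assumptions (Gaussian response, a single random intercept, intercept-only dispersion models), not the actual hglm() code: with intercept-only gamma GLMs and prior weights (1 − h)/2, the dispersion updates of Eqs. (6)-(9) reduce to weighted means of the squared residuals.

```r
set.seed(42)
n.clus <- 5; n.per <- 20; n <- n.clus * n.per; q <- n.clus
Z <- kronecker(diag(q), rep(1, n.per))       # random-intercept design
X <- matrix(1, n, 1)                         # fixed-effects design
y <- drop(Z %*% rnorm(q, 0, sqrt(0.2)) + rnorm(n, 0, 1))

Ta <- rbind(cbind(X, Z),
            cbind(matrix(0, q, 1), diag(q))) # augmented design T_a (Eq. 4)
ya <- c(y, rep(0, q))                        # augmented response
sigma2e <- sigma2u <- 1                      # starting values

for (it in 1:100) {
  w <- c(rep(1/sigma2e, n), rep(1/sigma2u, q))    # diagonal of W^(-1)
  XtWX  <- crossprod(Ta, w * Ta)                  # T_a^t W^(-1) T_a
  delta <- solve(XtWX, crossprod(Ta, w * ya))     # weighted least squares
  h  <- rowSums((Ta %*% solve(XtWX)) * (w * Ta))  # leverages: diag(H_a), Eq. 5
  r2 <- drop(ya - Ta %*% delta)^2                 # squared residuals e_i^2, u_j^2
  # Intercept-only gamma GLM on r2/(1 - h) with prior weights (1 - h)/2
  # reduces to these weighted means:
  new.e <- sum(r2[1:n]) / sum(1 - h[1:n])
  new.u <- sum(r2[n + 1:q]) / sum(1 - h[n + 1:q])
  if (abs(new.e - sigma2e) + abs(new.u - sigma2u) < 1e-8) break
  sigma2e <- new.e; sigma2u <- new.u
}
c(sigma2e = sigma2e, sigma2u = sigma2u)
```

The same loop structure carries over to non-Gaussian responses by replacing the squared residuals with deviance components from the augmented GLM, as described above.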
Implementation details

Distributions and link functions

There are two important classes of models that can be fitted in hglm: GLMM and conjugate HGLM. GLMMs have Gaussian random effects. Conjugate HGLMs have been commonly used partly due to the fact that explicit formulas for the marginal likelihood exist. HGLMs may be used to fit models in survival analysis (frailty models), where for instance the complementary-log-log link function can be used on binary responses (see e.g., Carling et al., 2004). The gamma distribution plays an important role in modeling responses with a constant coefficient of variation (see Chapter 8 in McCullagh and Nelder, 1989). For such responses with a gamma distributed random effect we have a gamma-gamma model. A summary of the most important models is given in Tables 1 and 2. Note that the random-effect distribution can be an arbitrary conjugate exponential-family distribution. For the specific case where the random-effect distribution is a conjugate to the distribution of y, this is called a conjugate HGLM. Further implementation details can be found in the hglm vignette.

Possible future developments

In the current version of hglm() it is possible to include a single random effect in the mean part of the model. An important development would be to include several random effects in the mean part of the model and also to include random effects in the dispersion parts of the model. The latter class of models is called Double HGLM and has been shown to be a useful tool for modeling heavy tailed distributions (Lee and Nelder, 2006).

The algorithm of hglm() gives true marginal likelihood estimates for the fixed effects in conjugate HGLM (Lee and Nelder, 1996, pp. 629), whereas for other models the estimates are approximated. Lee and co-workers (see Lee et al., 2006, and references therein) have developed higher-order approximations, which are not implemented in the current version of the hglm package. For such extensions, we refer to the commercially available GenStat software (Payne et al., 2007), the recently available R package HGLMMM (Molas, 2010) and also to coming updates of hglm.

Examples

Example 1: A linear mixed model

Data description The output from the hglm() function for a linear mixed model is compared to the results from the lme() function in the nlme (Pinheiro et al., 2009) package using simulated data. In the simulated data there are five clusters with 20 observations in each cluster. For the mean part of the model, the simulated intercept value is µ = 0, the variance for the random effect is σ²_u = 0.2, and the residual variance is σ²_e = 1.0.

Both functions produce the same estimate of the fixed intercept effect of 0.1473 (s.e. 0.16) and also the same variance component estimates. The summary.hglm() function gives the estimate of the variance component for the random intercept (0.082) as well as the residual variance (0.84). It also gives the logarithm of the variance component estimates together with standard errors below the lines Model estimates for the dispersion term and Dispersion model for the random effects. The lme() function gives the square root of the variance component estimates.

The model diagnostics produced by the plot.hglm function are shown in Figures 1 and 2. The data are completely balanced and therefore produce equal leverages (hatvalues) for all observations and also for all random effects (Figure 1). Moreover, the assumption of the deviance components being gamma distributed is acceptable (Figure 2).

Figure 1: Hatvalues (i.e. diagonal elements of the augmented hat-matrix) for each observation 1 to 100, and for each level in the random effect (index 101-105).
Table 1: Commonly used distributions and link functions possible to fit with hglm()

    Model name          y|u distribution   Link g(µ)      u distribution   Link v(u)
    Linear mixed model  Gaussian           identity       Gaussian         identity
    Binomial conjugate  Binomial           logit          Beta             logit
    Binomial GLMM       Binomial           logit          Gaussian         identity
    Binomial frailty    Binomial           comp-log-log   Gamma            log
    Poisson GLMM        Poisson            log            Gaussian         identity
    Poisson conjugate   Poisson            log            Gamma            log
    Gamma GLMM          Gamma              log            Gaussian         identity
    Gamma conjugate     Gamma              inverse        Inverse-Gamma    inverse
    Gamma-Gamma         Gamma              log            Gamma            log

Table 2: hglm code for commonly used models

    Model name            Setting for family argument   Setting for rand.family argument
    Linear mixed model*   gaussian(link = identity)     gaussian(link = identity)
    Beta-Binomial         binomial(link = logit)        Beta(link = logit)
    Binomial GLMM         binomial(link = logit)        gaussian(link = identity)
    Binomial frailty      binomial(link = cloglog)      Gamma(link = log)
    Poisson GLMM          poisson(link = log)           gaussian(link = identity)
    Poisson frailty       poisson(link = log)           Gamma(link = log)
    Gamma GLMM            Gamma(link = log)             gaussian(link = identity)
    Gamma conjugate       Gamma(link = inverse)         inverse.gamma(link = inverse)
    Gamma-Gamma           Gamma(link = log)             Gamma(link = log)

    * For example, the hglm() code for a linear mixed model is
      hglm(family = gaussian(link = identity), rand.family = gaussian(link = identity), ...)

Figure 2: Deviance diagnostics for each observation and each level in the random effect.

The R code and output for this example is as follows:

R> set.seed(123)
R> n.clus <- 5 #No. of clusters
R> n.per.clus <- 20 #No. of obs. per cluster
R> sigma2_u <- 0.2 #Variance of random effect
R> sigma2_e <- 1 #Residual variance
R> n <- n.clus*n.per.clus
R> X <- matrix(1, n, 1)
R> Z <- diag(n.clus)%x%rep(1, n.per.clus)
R> a <- rnorm(n.clus, 0, sqrt(sigma2_u))
R> e <- rnorm(n, 0, sqrt(sigma2_e))
R> mu <- 0
R> y <- mu + Z%*%a + e
R> lmm <- hglm(y = y, X = X, Z = Z)
R> summary(lmm)
R> plot(lmm)

Call:
hglm.default(X = X, y = y, Z = Z)

DISPERSION MODEL
WARNING: h-likelihood estimates through EQL can be biased.
Model estimates for the dispersion term:
[1] 0.8400608
Model estimates for the dispersion term:
Link = log
Effects:
  Estimate Std. Error
   -0.1743     0.1441
Dispersion = 1 is used in Gamma model on deviances
to calculate the standard error(s).
Dispersion parameter for the random effects
[1] 0.08211
Dispersion model for the random effects:
Link = log
Effects:
  Estimate Std. Error
   -2.4997     0.8682
Dispersion = 1 is used in Gamma model on deviances
to calculate the standard error(s).
MEAN MODEL
Summary of the fixed effects estimates
    Estimate Std. Error t value Pr(>|t|)
X.1   0.1473     0.1580   0.933    0.353
Note: P-values are based on 96 degrees of freedom
Summary of the random effects estimate
     Estimate Std. Error
[1,]  -0.3237     0.1971
[2,]  -0.0383     0.1971
[3,]   0.3108     0.1971
[4,]  -0.0572     0.1971
[5,]   0.1084     0.1971
EQL estimation converged in 5 iterations.

R> #Same analysis with the lme function
R> library(nlme)
R> clus <- rep(1:n.clus,
+     rep(n.per.clus, n.clus))
R> summary(lme(y ~ 0 + X,
+     random = ~ 1 | clus))

Linear mixed-effects model fit by REML
Data: NULL
      AIC      BIC    logLik
  278.635 286.4203 -136.3175

Random effects:
Formula: ~1 | clus
        (Intercept) Residual
StdDev:   0.2859608   0.9166

Fixed effects: y ~ 0 + X
      Value Std.Error DF  t-value p-value
X 0.1473009 0.1573412 95 0.9361873  0.3516

Standardized Within-Group Residuals:
       Min         Q1        Med        Q3       Max
-2.5834807 -0.6570612  0.0270673 0.6677986 2.1724148

Number of Observations: 100
Number of Groups: 5

Example 2: Analysis of simulated data for a linear mixed model with heteroscedastic residual variance

Data description Here, a heteroscedastic residual variance is added to the simulated data from the previous example. Given the explanatory variable x_d, the simulated residual variance is 1.0 for x_d = 0 and 2.72 for x_d = 1. The output shows that the variance of the random effect is 0.109, and that β̂_d = (−0.32, 1.47), i.e. the two residual variances are estimated as 0.72 and 3.16. (Code continued from Example 1)

R> beta.disp <- 1
R> X_d <- matrix(1, n, 2)
R> X_d[,2] <- rbinom(n, 1, .5)
R> colnames(X_d) <- c("Intercept", "x_d")
R> e <- rnorm(n, 0,
+     sqrt(sigma2_e*exp(beta.disp*X_d[,2])))
R> y <- mu + Z%*%a + e
R> summary(hglm(y = y, X = X, Z = Z,
+     X.disp = X_d))

Call:
hglm.default(X = X, y = y, Z = Z, X.disp = X_d)

DISPERSION MODEL
WARNING: h-likelihood estimates through EQL can be biased.
Model estimates for the dispersion term:
Link = log
Effects:
          Estimate Std. Error
Intercept  -0.3225     0.2040
x_d         1.4744     0.2881
Dispersion = 1 is used in Gamma model on deviances
to calculate the standard error(s).
Dispersion parameter for the random effects
[1] 0.1093
Dispersion model for the random effects:
Link = log
Effects:
  Estimate Std. Error
   -2.2135     0.8747
Dispersion = 1 is used in Gamma model on deviances
to calculate the standard error(s).
MEAN MODEL
Summary of the fixed effects estimates
    Estimate Std. Error t value Pr(>|t|)
X.1  -0.0535     0.1836  -0.291    0.771
Note: P-values are based on 96 degrees of freedom
Summary of the random effects estimate
     Estimate Std. Error
[1,]   0.0498     0.2341
[2,]  -0.2223     0.2276
[3,]   0.4404     0.2276
[4,]  -0.1786     0.2276
[5,]  -0.0893     0.2296
EQL estimation converged in 5 iterations.

Example 3: Fitting a Poisson model with gamma random effects, and fixed effects in the dispersion term

Data description We simulate a Poisson model with random effects and estimate the parameter in the dispersion term for an explanatory variable x_d. The estimated dispersion parameter for the random effects is 0.6556. (Code continued from Example 2)

R> u <- rgamma(n.clus,1)
R> eta <- exp(mu + Z%*%u)
R> y <- rpois(length(eta), eta)
R> gamma.pois <- hglm(y = y, X = X, Z = Z,
+     X.disp = X_d,
+     family = poisson(link = log),
+     rand.family = Gamma(link = log))
R> summary(gamma.pois)

Call:
hglm.default(X = X, y = y, Z = Z,
    family = poisson(link = log),
    rand.family = Gamma(link = log), X.disp = X_d)

DISPERSION MODEL
WARNING: h-likelihood estimates through EQL can be biased.
Model estimates for the dispersion term:
Link = log
Effects:
          Estimate Std. Error
Intercept  -0.0186     0.2042
x_d         0.4087     0.2902
Dispersion = 1 is used in Gamma model on deviances
to calculate the standard error(s).
Dispersion parameter for the random effects
[1] 1.926
Dispersion model for the random effects:
Link = log
Effects:
  Estimate Std. Error
    0.6556     0.7081
Dispersion = 1 is used in Gamma model on deviances
to calculate the standard error(s).
MEAN MODEL
Summary of the fixed effects estimates
    Estimate Std. Error t value Pr(>|t|)
X.1   2.3363     0.6213    3.76 0.000293
---
Note: P-values are based on 95 degrees of freedom
Summary of the random effects estimate
     Estimate Std. Error
[1,]   1.1443     0.6209
[2,]  -1.6482     0.6425
[3,]  -2.5183     0.6713
[4,]  -1.0243     0.6319
[5,]   0.2052     0.6232
EQL estimation converged in 3 iterations.

Example 4: Incorporating correlated random effects in a linear mixed model - a genetics example

Data description The data consists of 2025 individuals from two generations where 1000 individuals have observed trait values y that are approximately normal (Figure 3). The data we analyze was simulated for the QTLMAS 2009 Workshop (Coster et al., 2010)[1]. A longitudinal growth trait was simulated. For simplicity we analyze only the values given on the third occasion at age 265 days.

Figure 3: Histogram and qqplot for the analyzed trait.

We fitted a model with a fixed intercept and a random animal effect, a, where the correlation structure of a is given by the additive relationship matrix A (which is obtained from the available pedigree information). An incidence matrix Z0 was constructed and relates observation number with id-number in the pedigree. For observation y_i coming from individual j in the ordered pedigree file Z0[i, j] = 1, and all other elements are 0. Let L be the Cholesky factorization of A, and Z = Z0 L. The design matrix for the fixed effects, X, is a column of ones. The estimated variance components are σ̂²_e = 2.21 and σ̂²_u = 1.50. The R code for this example is given below.

R> data(QTLMAS)
R> y <- QTLMAS[,1]
R> Z <- QTLMAS[,2:2026]
R> X <- matrix(1, 1000, 1)
R> animal.model <- hglm(y = y, X = X, Z = Z)
R> print(animal.model)

Call:
hglm.default(X = X, y = y, Z = Z)

Fixed effects:
     X.1
7.279766
Random effects:
[1] -1.191733707  1.648604776  1.319427376 -0.928258503
[5] -0.471083317 -1.058333534  1.011451565  1.879641994
[9]  0.611705900 -0.259125073 -1.426788944 -0.005165978
...
Dispersion parameter for the mean model:
[1] 2.211169
Dispersion parameter for the random effects:
[1] 1.502516
EQL estimation converged in 2 iterations

Example 5: Binomial-beta model applied to seed germination data

Data description The seed germination data presented by Crowder (1978) has previously been analyzed using a binomial GLMM (Breslow and Clayton, 1993) and a binomial-beta HGLM (Lee and Nelder, 1996). The data consists of 831 observations from 21 germination plates. The effect of seed variety and type of root extract was studied in a 2 × 2 factorial lay-out. We fit the binomial-beta HGLM used by Lee and Nelder (1996) and setting fix.disp = 1 in hglm() produces comparable estimates to the ones obtained by Lee and Nelder (with differences < 2 × 10^-3). The beta distribution parameter α in Lee and Nelder (1996) was defined as 1/(2a) where a is the dispersion term obtained from hglm(). The output from the R code given below gives â = 0.0248 and the corresponding estimate given in Lee and Nelder (1996) is â = 1/(2α̂) = 0.023. We conclude that the hglm package produces similar results as the ones presented in Lee and Nelder (1996) and the dispersion parameters estimated using the EQL method in GenStat differ by less than 1%. Additional examples, together with comparisons to estimates produced by GenStat, are given in the hglm vignette included in the package on CRAN.

[1] http://www.qtlmas2009.wur.nl/UK/Dataset

R> data(seeds)
R> germ <- hglm(
+     fixed = r/n ~ extract*I(seed=="O73"),
+     weights = n, data = seeds,
+     random = ~1|plate, family = binomial(),
+     rand.family = Beta(), fix.disp = 1)
R> summary(germ)

Call:
hglm.formula(family = binomial(), rand.family = Beta(),
    fixed = r/n ~ extract * I(seed == "O73"),
    random = ~1 | plate, data = seeds,
    weights = n, fix.disp = 1)

DISPERSION MODEL
WARNING: h-likelihood estimates through EQL can be biased.
Model estimates for the dispersion term:
[1] 1
Model estimates for the dispersion term:
Link = log
Effects:
[1] 1
Dispersion = 1 is used in Gamma model on deviances to
calculate the standard error(s).
Dispersion parameter for the random effects
[1] 0.02483
Dispersion model for the random effects:
Link = log
Effects:
  Estimate Std. Error
   -3.6956     0.5304
Dispersion = 1 is used in Gamma model on deviances to
calculate the standard error(s).
MEAN MODEL
Summary of the fixed effects estimates
                               Estimate Std. Error t value
(Intercept)                     -0.5421     0.1928  -2.811
extractCucumber                  1.3386     0.2733   4.898
I(seed == "O73")TRUE             0.0751     0.3114   0.241
extractCucumber:I(seed=="O73")  -0.8257     0.4341  -1.902
                               Pr(>|t|)
(Intercept)                    0.018429
extractCucumber                0.000625
I(seed == "O73")TRUE           0.814264
extractCucumber:I(seed=="O73") 0.086343
---
Note: P-values are based on 10 degrees of freedom
Summary of the random effects estimate
      Estimate Std. Error
 [1,]  -0.2333     0.2510
 [2,]   0.0085     0.2328
 ...
[21,]  -0.0499     0.2953

EQL estimation converged in 7 iterations.

Summary

The hierarchical generalized linear model approach offers new possibilities to fit generalized linear models with random effects. The hglm package extends existing GLMM fitting algorithms to include fixed effects in a model for the residual variance, fits models where the random effect distribution is not necessarily Gaussian and estimates variance components for correlated random effects. For such models there are important applications in, for instance: genetics (Noh et al., 2006), survival analysis (Ha and Lee, 2005), credit risk modeling (Alam and Carling, 2008), count data (Lee et al., 2006) and dichotomous responses (Noh and Lee, 2007). We therefore expect that this new package will be of use for applied statisticians in several different fields.

Bibliography

M. Alam and K. Carling. Computationally feasible estimation of the covariance structure in generalized linear mixed models GLMM. Journal of Statistical Computation and Simulation, 78:1227-1237, 2008.

M. Alam, L. Ronnegard, and X. Shen. hglm: Hierarchical Generalized Linear Models, 2010. URL http://CRAN.R-project.org/package=hglm. R package version 1.1.1.

D. Bates and M. Maechler. lme4: Linear mixed-effects models using S4 classes, 2010. URL http://CRAN.R-project.org/package=lme4. R package version 0.999375-37.

N. E. Breslow and D. G. Clayton. Approximate inference in generalized linear mixed models. Journal of the American Statistical Association, 88:9-25, 1993.

K. Carling, L. Rönnegård, and K. Roszbach. An analysis of portfolio credit risk when counterparties are interdependent within industries. Sveriges Riksbank Working Paper, 168, 2004.

A. Coster, J. Bastiaansen, M. Calus, C. Maliepaard, and M. Bink. QTLMAS 2010: Simulated dataset. BMC Proceedings, 4(Suppl 1):S3, 2010.

M. J. Crowder. Beta-binomial ANOVA for proportions. Applied Statistics, 27:34-37, 1978.

P. K. Dunn and G. K. Smyth. dglm: Double generalized linear models, 2009. URL http://CRAN.R-project.org/package=dglm. R package version 1.6.1.

I. D. Ha and Y. Lee. Comparison of hierarchical likelihood versus orthodox best linear unbiased predictor approaches for frailty models. Biometrika, 92:717-723, 2005.

C. R. Henderson. A simple method for computing the inverse of a numerator relationship matrix used in prediction of breeding values. Biometrics, 32(1):69-83, 1976.

Y. Lee and J. A. Nelder. Double hierarchical generalized linear models with discussion. Applied Statistics, 55:139-185, 2006.

Y. Lee and J. A. Nelder. Hierarchical generalized linear models with discussion. J. R. Statist. Soc. B, 58:619-678, 1996.
Y. Lee, J. A. Nelder, and Y. Pawitan. Generalized linear models with random effects. Chapman & Hall/CRC, 2006.

Y. Lee, J. A. Nelder, and M. Noh. H-likelihood: problems and solutions. Statistics and Computing, 17:49-55, 2007.

M. Lynch and B. Walsh. Genetics and analysis of Quantitative Traits. Sinauer Associates, Inc., 1998. ISBN 087893481.

P. McCullagh and J. A. Nelder. Generalized linear models. Chapman & Hall/CRC, 1989.

M. Molas. HGLMMM: Hierarchical Generalized Linear Models, 2010. URL http://CRAN.R-project.org/package=HGLMMM. R package version 0.1.1.

M. Noh and Y. Lee. REML estimation for binary data in GLMMs. Journal of Multivariate Analysis, 98:896-915, 2007.

M. Noh, B. Yip, Y. Lee, and Y. Pawitan. Multicomponent variance estimation for binary traits in family-based studies. Genetic Epidemiology, 30:37-47, 2006.

R. W. Payne, D. A. Murray, S. A. Harding, D. B. Baird, and D. M. Soutar. GenStat for Windows (10th edition) introduction, 2007. URL http://www.vsni.co.uk/software/genstat.

J. Pinheiro, D. Bates, S. DebRoy, D. Sarkar, and the R Core team. nlme: Linear and Nonlinear Mixed Effects Models, 2009. URL http://CRAN.R-project.org/package=nlme. R package version 3.1-96.

L. Rönnegård and Ö. Carlborg. Separation of base allele and sampling term effects gives new insights in variance component QTL analysis. BMC Genetics, 8(1), 2007.

W. N. Venables and B. D. Ripley. Modern Applied Statistics with S. Springer, New York, fourth edition, 2002. URL http://www.stats.ox.ac.uk/pub/MASS4. ISBN 0-387-95457-0.

Lars Rönnegård
Statistics Unit, Dalarna University, Sweden
and
Department of Animal Breeding and Genetics
Swedish University of Agricultural Sciences, Sweden
lrn@du.se

Xia Shen
Department of Cell and Molecular Biology
Uppsala University, Sweden
and
Statistics Unit, Dalarna University, Sweden
xia.shen@lcb.uu.se

Moudud Alam
Statistics Unit, Dalarna University, Sweden
maa@du.se
dclone: Data Cloning in R

by Péter Sólymos

Abstract The dclone R package contains low level functions for implementing maximum likelihood estimating procedures for complex models using data cloning and Bayesian Markov Chain Monte Carlo methods with support for JAGS, WinBUGS and OpenBUGS.

Introduction

Hierarchical models, including generalized linear models with mixed random and fixed effects, are increasingly popular. The rapid expansion of applications is largely due to the advancement of the Markov Chain Monte Carlo (MCMC) algorithms and related software (Gelman et al., 2003; Gilks et al., 1996; Lunn et al., 2009). Data cloning is a statistical computing method introduced by Lele et al. (2007). It exploits the computational simplicity of the MCMC algorithms used in the Bayesian statistical framework, but it provides the maximum likelihood point estimates and their standard errors for complex hierarchical models. The use of the data cloning algorithm is especially valuable for complex models, where the number of unknowns increases with sample size (i.e. with latent variables), because inference and prediction procedures are often hard to implement in such situations.

The dclone R package (Sólymos, 2010) provides infrastructure for data cloning. Users who are familiar with Bayesian methodology can instantly use the package for maximum likelihood inference and prediction. Developers of R packages can build on the low level functionality provided by the package to implement more specific higher level estimation procedures for users who are not familiar with Bayesian methodology. This paper demonstrates the implementation of the data cloning algorithm, and presents a case study on how to write high level functions for specific modeling problems.

Theory of data cloning

Imagine a hypothetical situation where an experiment is repeated by k different observers, and all k experiments happen to result in exactly the same set of observations, y(k) = (y, y, ..., y). The likelihood function based on the combination of the data from these k experiments is L(θ, y(k)) = [L(θ, y)]^k. The location of the maximum of L(θ, y(k)) exactly equals the location of the maximum of the function L(θ, y), and the Fisher information matrix based on this likelihood is k times the Fisher information matrix based on L(θ, y).

One can use MCMC methods to calculate the posterior distribution of the model parameters (θ) conditional on the data. Under regularity conditions, if k is large, the posterior distribution corresponding to k clones of the observations is approximately normal with mean θ̂ and variance 1/k times the inverse of the Fisher information matrix. When k is large, the mean of this posterior distribution is the maximum likelihood estimate and k times the posterior variance is the corresponding asymptotic variance of the maximum likelihood estimate if the parameter space is continuous. When some of the parameters are on the boundaries of their feasible space (Stram and Lee, 1994), point estimates can be correct, but currently the Fisher information cannot be estimated correctly by using data cloning. This is an area for further research, but such situations challenge other computing techniques as well.

Data cloning is a computational algorithm to compute maximum likelihood estimates and the inverse of the Fisher information matrix, and is related to simulated annealing (Brooks and Morgan, 1995). By using data cloning, the statistical accuracy of the estimator remains a function of the sample size and not of the number of cloned copies. Data cloning does not improve the statistical accuracy of the estimator by artificially increasing the sample size. The data cloning procedure avoids the analytical or numerical evaluation of high dimensional integrals, numerical optimization of the likelihood function, and numerical computation of the curvature of the likelihood function. Interested readers should consult Lele et al. (2007, 2010) for more details and mathematical proofs for the data cloning algorithm.
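The identity L(θ, y(k)) = [L(θ, y)]^k is easy to verify numerically for a toy model with no latent variables (a plain Poisson sample, made up here for illustration): the cloned log-likelihood is k times the original one, so its maximizer is unchanged while its curvature, and hence the Fisher information, is multiplied by k.

```r
set.seed(1)
y <- rpois(25, lambda = 3)
# log L(theta, y(k)) = k * log L(theta, y)
loglik <- function(lambda, k = 1) k * sum(dpois(y, lambda, log = TRUE))
opt1 <- optimize(loglik, c(0.1, 10), maximum = TRUE)
optk <- optimize(loglik, c(0.1, 10), k = 100, maximum = TRUE)
# Both maximizers equal the MLE (the sample mean), for any k:
c(mle = mean(y), k1 = opt1$maximum, k100 = optk$maximum)
```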
, n lo-tions for speciﬁc modeling problems. calities: αi ∼ normal 0, σ2Theory of data cloning λi = exp αi + X T β iImagine a hypothetical situation where an experi- Yi | λi ∼ Poisson (λi )ment is repeated by k different observers, and all kexperiments happen to result in exactly the same set The corresponding code for the simulation with β =of observations, y(k) = (y, y, . . . , y). The likelihood (1.8, −0.9), σ = 0.2, xi ∼ U (0, 1) is:function based on the combination of the data fromthese k experiments is L(θ, y(k) ) = [ L (θ, y)]k . The lo- > library(dclone) > set.seed(1234)cation of the maximum of L(θ, y(k) ) exactly equals > n <- 50the location of the maximum of the function L (θ, y), > beta <- c(1.8, -0.9)and the Fisher information matrix based on this like- > sigma <- 0.2lihood is k times the Fisher information matrix based > x <- runif(n, min = 0, max = 1)on L (θ, y). > X <- model.matrix(~ x)The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
> alpha <- rnorm(n, mean = 0, sd = sigma)
> lambda <- exp(alpha + drop(X %*% beta))
> Y <- rpois(n, lambda)

The first step in the data cloning algorithm is to construct the full Bayesian model of the problem with proper prior distributions for unknown parameters. We use flat normal priors for the βs and for log(σ). First we use the rjags (Plummer, 2010b) and coda (Plummer et al., 2010) R packages and the JAGS (Plummer, 2010a) software for model fitting. But the dclone package also supports WinBUGS (Spiegelhalter et al., 2003) and OpenBUGS (Spiegelhalter et al., 2007) via the R packages R2WinBUGS (Sturtz et al., 2005) and BRugs (Thomas et al., 2006), respectively. The corresponding model in the BUGS language is:

> glmm.model <- function() {
+     for (i in 1:n) {
+         Y[i] ~ dpois(lambda[i])
+         lambda[i] <- exp(alpha[i] +
+             inprod(X[i,], beta[1,]))
+         alpha[i] ~ dnorm(0, tau)
+     }
+     for (j in 1:np) {
+         beta[1,j] ~ dnorm(0, 0.001)
+     }
+     log.sigma ~ dnorm(0, 0.001)
+     sigma <- exp(log.sigma)
+     tau <- 1 / pow(sigma, 2)
+ }

Note that instead of writing the model into a file, we store it as an R function (see JAGS and WinBUGS documentation for how to correctly specify the model in the BUGS language). Although the BUGS and R syntaxes seem similar, the BUGS model function cannot be evaluated within R. Storing the BUGS model as an R function is handy, because the user does not have to manage different files when modeling. Nevertheless, the model can be supplied in a separate file by giving its name as character.

We also have to define the data as elements of a named list along with the names of nodes that we want to monitor (we can also set up initial values, number of burn-in iterations, number of iterations for the posterior sample, thinning values, etc.; see the dclone package documentation for details). Now we can do the Bayesian inference by calling the jags.fit function:

> dat <- list(Y = Y, X = X, n = n,
+     np = ncol(X))
> mod <- jags.fit(dat,
+     c("beta", "sigma"), glmm.model, n.iter = 1000)

The output mod is an "mcmc.list" object, which can be explored by methods such as summary or plot provided by the coda package.

The dclone package provides the bugs.fit wrapper function for WinBUGS/OpenBUGS. The BUGS model needs to be changed to run smoothly in WinBUGS/OpenBUGS:

> glmm.model.bugs <- function() {
+     for (i in 1:n) {
+         Y[i] ~ dpois(lambda[i])
+         lambda[i] <- exp(alpha[i] +
+             inprod(X[i,], beta[1,]))
+         alpha[i] ~ dnorm(0, tau) %_% I(-5, 5)
+     }
+     for (j in 1:np) {
+         beta[1,j] ~ dnorm(0, 0.01) %_% I(-5, 5)
+     }
+     log.sigma ~ dnorm(0, 0.01) %_% I(-5, 5)
+     sigma <- exp(log.sigma)
+     tau <- 1 / pow(sigma, 2)
+ }

In the bugs.fit function, the settings besides the data, params, model, and inits arguments follow the settings in the bugs/openbugs functions in the R2WinBUGS package. This leads to some differences between the arguments of the jags.fit and the bugs.fit functions. For example bugs.fit uses n.thin instead of thin, and n.burnin is equivalent to n.adapt + n.update as compared to jags.fit. The bugs.fit can return the results either in "mcmc.list" or "bugs" format. The reason for leaving different arguments for jags.fit and bugs.fit is that the aim of the dclone package is not to make the MCMC platforms interchangeable, but to provide data cloning facility for each. It is easy to adapt an existing BUGS code for data cloning, but it sometimes can be tricky to adapt a JAGS code to WinBUGS and vice versa, because of differences between the two dialects (i.e. truncation, censoring, autoregressive priors, etc., see Plummer (2010b)).

Here are the results from the three MCMC platforms:

> mod.wb <- bugs.fit(dat, c("beta", "sigma"),
+     glmm.model.bugs, DIC = FALSE, n.thin = 1)
> mod.ob <- bugs.fit(dat, c("beta", "sigma"),
+     glmm.model.bugs, program = "openbugs",
+     DIC = FALSE, n.thin = 1)
> sapply(list(JAGS = mod, WinBUGS = mod.wb,
+     OpenBUGS = mod.ob), coef)
          JAGS WinBUGS OpenBUGS
beta[1]  1.893   1.910   1.9037
beta[2] -1.050  -1.074  -1.0375
sigma    0.161   0.130   0.0732

The idea in the next step of the data cloning algorithm is that instead of using the likelihood for the observed data, we use the likelihood corresponding to k copies (clones) of the data. Actually cloning (repeating) the data k times is important if the model includes unobserved (latent) variables: in this way latent variables are cloned as well, thus contributing to the likelihood. We can use the rep function to repeat the data vectors, but it is less convenient for e.g. matrices or data frames. Thus, there is the dclone generic function with methods for various R object classes:

> dclone(1:5, 1)
Now we sigma 0.161 0.130 0.0732can do the Bayesian inference by calling the jags.fit The idea in the next step of the data cloning al-function: gorithm is that instead of using the likelihood for the> dat <- list(Y = Y, X = X, n = n, observed data, we use the likelihood corresponding+ np = ncol(X)) to k copies (clones) of the data. Actually cloning (re-> mod <- jags.fit(dat, peating) the data k times is important if the model+ c("beta", "sigma"), glmm.model, n.iter = 1000) includes unobserved (latent) variables: in this wayThe output mod is an "mcmc.list" object, which can latent variables are cloned as well, thus contributingbe explored by methods such as summary or plot pro- to the likelihood. We can use the rep function to re-vided by the coda package. peat the data vectors, but it is less convenient for e.g. The dclone package provides the bugs.fit wrap- matrices or data frames. Thus, there is the dcloneper function for WinBUGS/OpenBUGS. The BUGS generic function with methods for various R objectmodel needs to be changed to run smoothly in Win- classes:BUGS/OpenBUGS: > dclone(1:5, 1)The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
[1] 1 2 3 4 5

> dclone(1:5, 2)

[1] 1 2 3 4 5 1 2 3 4 5
attr(,"n.clones")
[1] 2
attr(,"n.clones")attr(,"method")
[1] "rep"

> dclone(matrix(1:4, 2, 2), 2)

     [,1] [,2]
[1,]    1    3
[2,]    2    4
[3,]    1    3
[4,]    2    4
attr(,"n.clones")
[1] 2
attr(,"n.clones")attr(,"method")
[1] "rep"

> dclone(data.frame(a=1:2, b=3:4), 2)

    a b
1_1 1 3
2_1 2 4
1_2 1 3
2_2 2 4

The number of clones can be extracted by the nclones function; it returns NULL for k = 1 and k otherwise.

The BUGS data specification might contain some elements that we do not want to clone (e.g. "np", the number of columns of the design matrix in this case). Thus the dclone method has different behaviour for lists than for non list classes (including data frames). We can define which elements should not be cloned, or which should be multiplied by k instead of being cloned k times.

> dat2 <- dclone(dat, n.clones = 2,
+     multiply = "n", unchanged = "np")
> nclones(dat2)

[1] 2
attr(,"method")
     Y     X       n  np
 "rep" "rep" "multi"  NA

The "method" attribute of the cloned object stores this information. There are three different ways of cloning (besides NA standing for unchanged): "rep" is for (longitudinal) repetitions, "multi" is for multiplication, and "dim" is repeating the data along an extra dimension (see later).

Now we do the model fitting with k = 2. The "mcmc.list" object inherits the information about the cloning:

> mod2 <- jags.fit(dat2,
+     c("beta", "sigma"), glmm.model, n.iter = 1000)

Similarly, the bugs.fit function takes care of the cloning information passed through the data argument:

> mod.wb2 <- bugs.fit(dat2, c("beta", "sigma"),
+     glmm.model.bugs, DIC = FALSE, n.thin = 1)
> mod.ob2 <- bugs.fit(dat2, c("beta", "sigma"),
+     glmm.model.bugs, program = "openbugs",
+     DIC = FALSE, n.thin = 1)

And here are the results based on k = 2 for the three MCMC platforms:

> sapply(list(JAGS = mod2, WinBUGS = mod.wb2,
+     OpenBUGS = mod.ob2), coef)

          JAGS WinBUGS OpenBUGS
beta[1]  1.918   1.905    1.896
beta[2] -1.114  -1.080   -1.078
sigma    0.207   0.187    0.243

For some models, indexing can be more complex, and simple repetitions of the data ("rep" method) are not appropriate. In case of non independent data (time series or spatial autoregressive models), cloning should be done over an extra dimension to ensure that clones are independent. For this purpose, one can use the dcdim function:

> (obj <- dclone(dcdim(data.matrix(1:5)), 2))

     clone.1 clone.2
[1,]       1       1
[2,]       2       2
[3,]       3       3
[4,]       4       4
[5,]       5       5
attr(,"n.clones")
[1] 2
attr(,"n.clones")attr(,"method")
[1] "dim"
attr(,"n.clones")attr(,"method")attr(,"drop")
[1] TRUE

If data cloning consists of repetitions of the data, our BUGS model usually does not need modifications. If we add an extra dimension to the data, the BUGS model and the data specification must reflect the extra dimension, too.

To demonstrate this, we consider a model and data set from Ponciano et al. (2009). They used the single-species population growth data from laboratory experiments of Gause (1934) with Paramecium aurelia. Gause initiated liquid cultures on day 0 at a concentration of two individuals per 0.5 cm³ of culture media. Then, on days 2–19, he took daily 0.5 cm³ samples of the microbe cultures and counted the number of cells in each sample. Ponciano et al. (2009) fitted discrete time stochastic models of population dynamics to describe Gause's data taking into account both process noise and observation error. The Beverton-Holt model incorporates a latent variable component (Nt, t = 0, 1, . . . , q) to describe an unobserved time series of actual population abundance. The latent variable component contains density dependence (β) and stochastic process noise (σ²). The
model incorporates a Poisson observation component to account for variability caused by sampling:

    μt = log(λ) + log(Nt−1) − log(1 + βNt−1)
    log(Nt) ∼ normal(μt, σ²)
    Yt | Nt ∼ Poisson(Nt)

λ is the finite rate of increase in population abundance. The corresponding BUGS model is:

> beverton.holt <- function() {
+     for (j in 1:k) {
+         for (i in 2:(n+1)) {
+             Y[(i-1),j] ~ dpois(exp(log.N[i,j]))
+             log.N[i,j] ~ dnorm(mu[i,j], 1 / sigma^2)
+             mu[i,j] <- log(lambda) + log.N[(i-1),j]
+                 - log(1 + beta * exp(log.N[(i-1),j]))
+         }
+         log.N[1,j] ~ dnorm(mu0, 1 / sigma^2)
+     }
+     beta ~ dlnorm(-1, 1)
+     sigma ~ dlnorm(0, 1)
+     tmp ~ dlnorm(0, 1)
+     lambda <- tmp + 1
+     mu0 <- log(lambda) + log(2) - log(1 + beta * 2)
+ }

Note that besides the indexing for the time series, the model contains another dimension for the clones. We define the data set by using the dcdim method for cloning the observations. We include an element k = 1 that will be multiplied to indicate how many clones (columns) are in the data, while n (number of observations) remains unchanged:

> paurelia <- c(17, 29, 39, 63, 185, 258, 267,
+     392, 510, 570, 650, 560, 575, 650, 550,
+     480, 520, 500)
> bhdat <- list(Y=dcdim(data.matrix(paurelia)),
+     n=length(paurelia), k=1)
> dcbhdat <- dclone(bhdat, n.clones = 5,
+     multiply = "k", unchanged = "n")

> bhmod <- jags.fit(dcbhdat,
+     c("lambda","beta","sigma"), beverton.holt,
+     n.iter=1000)

> coef(bhmod)

   beta  lambda   sigma
0.00218 2.18755 0.12777

Results compare well with estimates in Ponciano et al. (2009) (β̂ = 0.00235, λ̂ = 2.274, σ̂ = 0.1274).

Iterative model fitting

We can use the dc.fit function to iteratively fit the same model with various k values as described in Lele et al. (2010). The function takes similar arguments to dclone and jags.fit (or bugs.fit, if flavour = "bugs" is used). Because the information in the data overrides the priors by increasing the number of clones, we can improve MCMC convergence by making the priors more informative during the iterative fitting process. We achieve this by modifying the BUGS model for the Poisson GLMM example:

> glmm.model.up <- function() {
+     for (i in 1:n) {
+         Y[i] ~ dpois(lambda[i])
+         lambda[i] <- exp(alpha[i] +
+             inprod(X[i,], beta[1,]))
+         alpha[i] ~ dnorm(0, 1/sigma^2)
+     }
+     for (j in 1:np) {
+         beta[1,j] ~ dnorm(pr[j,1], pr[j,2])
+     }
+     log.sigma ~ dnorm(pr[(np+1),1], pr[(np+1),2])
+     sigma <- exp(log.sigma)
+     tau <- 1 / pow(sigma, 2)
+ }

We also define a function to update the priors. The function returns values for flat prior specification in the first iteration, and uses the updated posterior means (via the coef method) and data cloning standard errors (via the dcsd method) in the rest, because priors that have large probability mass near the maximum likelihood estimate require fewer clones to achieve the desired accuracy.

> upfun <- function(x) {
+     if (missing(x)) {
+         np <- ncol(X)
+         return(cbind(rep(0, np+1),
+             rep(0.001, np+1)))
+     } else {
+         ncl <- nclones(x)
+         if (is.null(ncl))
+             ncl <- 1
+         par <- coef(x)
+         se <- dcsd(x)
+         log.sigma <- mcmcapply(x[,"sigma"], log)
+         par[length(par)] <- mean(log.sigma)
+         se[length(se)] <- sd(log.sigma) * sqrt(ncl)
+         return(cbind(par, se))
+     }
+ }

Finally, we define prior specifications as part of the data ("pr"), and provide the updating function in the dc.fit call:

> updat <- list(Y = Y, X = X, n = n,
+     np = ncol(X), pr = upfun())
> k <- c(1, 5, 10, 20)
> dcmod <- dc.fit(updat, c("beta", "sigma"),
+     glmm.model.up, n.clones = k, n.iter = 1000,
+     multiply = "n", unchanged = "np",
+     update = "pr", updatefun = upfun)

> summary(dcmod)

Iterations = 1001:2000
Thinning interval = 1
Number of chains = 3
Sample size per chain = 1000
Number of clones = 20
1. Empirical mean and standard deviation for each
   variable, plus standard error of the mean:

          Mean     SD  DC SD  Naive SE
beta[1]  1.894 0.0368  0.164  0.000671
beta[2] -1.082 0.0734  0.328  0.001341
sigma    0.278 0.0256  0.114  0.000467

        Time-series SE R hat
beta[1]        0.00259  1.01
beta[2]        0.00546  1.01
sigma          0.00194  1.04

2. Quantiles for each variable:

          2.5%    25%   50%    75%  97.5%
beta[1]  1.823  1.869  1.89  1.920  1.964
beta[2] -1.230 -1.133 -1.08 -1.029 -0.943
sigma    0.226  0.260  0.28  0.296  0.323

The summary contains data cloning standard errors (DC SD) and R̂ values for MCMC chain convergence (Gelman and Rubin, 1992).

Diagnostics

We can see how the increase in the number of clones affects our inferences on single nodes by using the dctable function. This function retrieves the information stored during the iterative fitting process (or can be used to compare more than one fitted model). Only the last MCMC object is returned by dc.fit, but descriptive statistics of the posterior distribution are stored in each step (Figure 1). The asymptotic convergence can be visually evaluated by plotting the posterior variances scaled by the variance for the model at k = 1 (or the smallest k). If scaled variances are decreasing at a 1/k rate and have reached a lower bound (say < 0.05), the data cloning algorithm has converged. If scaled variances are not decreasing at the proper rate, that might indicate identifiability issues (Lele et al., 2010). On the log scale, this graph should show an approximately linear decrease of log(scaled variance) vs. log(k) for each parameter (Figure 2).

> dct <- dctable(dcmod)
> plot(dct)
> plot(dct, type="log.var")

Lele et al. (2010) introduced diagnostic measures for checking the convergence of the data cloning algorithm which are based on the joint posterior distribution and not only on single parameters. These include calculating the largest eigenvalue of the posterior variance covariance matrix (lambdamax.diag), or calculating the mean squared error and another correlation-like fit statistic (r²) based on a χ² approximation (chisq.diag with a plot method). The maximum eigenvalue reflects the degeneracy of the posterior distribution, while the two fit measures reflect the adequacy of the normal approximation. All three statistics should converge to zero as k increases. If this happens, different prior specifications are no longer influencing the results (Lele et al., 2007, 2010).

These measures and multivariate R̂ values for MCMC chain convergence (Brooks and Gelman, 1997) are calculated during the iterations by dc.fit as well, and can be retrieved by the function dcdiag:

> dcdiag(dcmod)

  n.clones lambda.max ms.error r.squared r.hat
1        1    0.11538   0.1282   0.02103  1.66
2        5    0.02225   0.0229   0.00277  1.02
3       10    0.01145   0.0383   0.00612  1.01
4       20    0.00643   0.0241   0.00173  1.03

The data cloning algorithm requires that MCMC chains are properly mixed and the posterior distribution is nearly degenerate multivariate normal. These requirements have been satisfied in the case of the Poisson GLMM model. R̂ values show better mixing properties of the MCMC chains with higher k values, and in this example it is expected, because we have used informative priors near the maximum likelihood estimates for the cases k > 1.

The functions dctable and dcdiag can be used to determine the number of clones required for a particular model and data set. Also, these diagnostic functions can alert the modeller when the model contains non-identifiable parameters. Lele et al. (2010) gives several examples; here we consider the normal-normal mixture:

    μi ∼ normal(γ, τ²)
    Yi | μi ∼ normal(μi, σ²)

where the parameters (γ, σ² + τ²) are known to be identifiable, but (γ, σ², τ²) are not.

We simulate random observations under this model (γ = 2.5, σ = 0.2, τ = 0.5) and fit the corresponding BUGS model:

> gamma <- 2.5
> sigma <- 0.2
> tau <- 0.5
> set.seed(2345)
> mu <- rnorm(n, gamma, tau)
> Y <- rnorm(n, mu, sigma)
> nn.model <- function() {
+     for (i in 1:n) {
+         Y[i] ~ dnorm(mu[i], prec1)
+         mu[i] ~ dnorm(gamma, prec2)
+     }
+     gamma ~ dnorm(0, 0.001)
+     log.sigma ~ dnorm(0, 0.001)
+     sigma <- exp(log.sigma)
+     prec1 <- 1 / pow(sigma, 2)
+     log.tau ~ dnorm(0, 0.001)
+     tau <- exp(log.tau)
+     prec2 <- 1 / pow(tau, 2)
+ }
> nndat <- list(Y = Y, n = n)
Figure 1: Summary statistics for the Poisson mixed model example. Means are converging towards the maximum likelihood estimates (points), standard errors (vertical lines) are getting shorter with increasing number of clones (95 and 50% quantile ranges and median also depicted).

Figure 2: Convergence diagnostics for data cloning based on the Poisson mixed model example. Log of Scaled Variances should decrease linearly with log(k), the scaled variance value close to zero (< 0.05) indicates convergence of the data cloning algorithm.

> nnmod <- dc.fit(nndat, c("gamma","sigma","tau"),
+     nn.model, n.clones=c(1,10,20,30,40,50),
+     n.iter=1000, multiply="n")

> dcdiag(nnmod)

  n.clones lambda.max ms.error r.squared r.hat
1        1     0.0312    0.508   0.02985  1.18
2       10     0.0364    0.275   0.00355  2.06
3       20     1.2617    1.111   0.13714 50.15
4       30     0.1530    0.753   0.10267 12.91
5       40     1.7972    0.232   0.03770 92.87
6       50     1.8634    0.241   0.04003 15.72

> vars <- mcmcapply(nnmod[,c("sigma","tau")],
+     array)^2
> sigma^2 + tau^2

[1] 0.29

> summary(rowSums(vars))

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   0.21    0.23    2.87    3.00    6.04    6.84

The high r.hat and the variable lambda.max and fit statistic values that are not converging to zero indicate possible problems with identifiability.

Inference and prediction

We can explore the results with methods defined for "mcmc.list" objects (many such methods are available in the coda package, e.g. summary, plot, etc.). The dclone package adds a few more methods: coef returns the mean of the posterior, dcsd the data cloning standard errors. Any function returning a scalar statistic can be passed via the mcmcapply function:

> coef(dcmod)

beta[1] beta[2]   sigma
  1.894  -1.082   0.278
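The identifiability failure in the normal-normal mixture above can also be seen without any simulation. This base-R sketch (not part of the paper) checks that the marginal likelihood of Y depends on (σ, τ) only through σ² + τ², so swapped values of the two standard deviations are indistinguishable from data:

```r
# Marginally, Y_i ~ normal(gamma, sigma^2 + tau^2): any (sigma, tau)
# pair with the same sum of squares yields the same likelihood.
gamma <- 2.5
yy <- seq(0, 5, by = 0.1)
d1 <- dnorm(yy, mean = gamma, sd = sqrt(0.2^2 + 0.5^2))  # sigma = 0.2, tau = 0.5
d2 <- dnorm(yy, mean = gamma, sd = sqrt(0.5^2 + 0.2^2))  # sigma = 0.5, tau = 0.2
all.equal(d1, d2)  # TRUE: the two parameter pairs cannot be told apart
```

This is exactly the degeneracy that dcdiag flags numerically via the non-converging lambda.max, fit statistic, and r.hat values.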
> dcsd(dcmod)

beta[1] beta[2]   sigma
  0.164   0.328   0.114

> mcmcapply(dcmod, sd) * sqrt(nclones(dcmod))

beta[1] beta[2]   sigma
  0.164   0.328   0.114

The asymptotic multivariate normality can be used to get Wald-type confidence intervals for the estimates based on the inverse of the Fisher information matrix. The vcov method returns the inverse Fisher information matrix, the confint method calculates confidence intervals assuming multivariate normality for MCMC objects with k > 1:

> confint(dcmod)

          2.5 % 97.5 %
beta[1]  1.5718  2.217
beta[2] -1.7253 -0.438
sigma    0.0534  0.502

> vcov(dcmod)

         beta[1]  beta[2]    sigma
beta[1]  0.02705 -0.04604 -0.00291
beta[2] -0.04604  0.10783 -0.00156
sigma   -0.00291 -0.00156  0.01308

Confidence intervals can also be obtained via parametric bootstrap or based on profile likelihood (Ponciano et al., 2009), but these are not currently available in the dclone package and often require substantial user intervention.

These methods are handy when we make predictions. We can use the maximum likelihood estimates and the variance-covariance matrix defined as a multivariate normal node in the BUGS model. For the Poisson mixed model example, the BUGS model for prediction will look like:

> glmm.pred <- function() {
+     for (i in 1:n) {
+         Y[i] ~ dpois(lambda[i])
+         lambda[i] <- exp(mu[i])
+         mu[i] <- alpha[i] +
+             inprod(X[i,], beta[1,])
+         alpha[i] ~ dnorm(0, tau)
+     }
+     tmp[1:(np+1)] ~ dmnorm(param[], prec[,])
+     beta[1,1:np] <- tmp[1:np]
+     sigma <- tmp[(np+1)]
+     tau <- 1 / pow(sigma, 2)
+ }

Now we add the estimates and the precision matrix prec to the data (the make.symmetric function prevents some problems related to matrix symmetry and numerical precision), and define X for the predictions (now we simply use the observed values of the covariates). Then do the modeling as usual by sampling the node "lambda":

> prec <- make.symmetric(solve(vcov(dcmod)))
> prdat <- list(X = X, n = nrow(X), np = ncol(X),
+     param = coef(dcmod), prec = prec)
> prmod <- jags.fit(prdat, "lambda", glmm.pred,
+     n.iter = 1000)

Writing high level functions

Suppose we want to provide a user friendly function to fit the Poisson mixed model with random intercept. We are now modeling the observed abundances (count based on point counts) of the Ovenbird (Seiurus aurocapilla) as a function of ecological site characteristics (upland/lowland, uplow) and percentage of total human disturbance around the sites (thd in the ovenbird data set). Data were collected from 182 sites in the Northern Boreal region of Alberta, Canada, between 2003 and 2008. Data were collected by the Alberta Biodiversity Monitoring Institute and are available at http://www.abmi.ca.

Our goal is to determine the effect of human disturbance on Ovenbird abundance, by controlling for site characteristics. But we know that other factors not taken into account, e.g. the amount of deciduous forest, might influence the abundance as well (Hobson and Bayne, 2002). So the random intercept will account for this unexplained environmental variability. The Poisson error component will account for random deviations from expected abundances (λi), and observed counts (Yi) represent a realization of this quantity.

Here is the high level function for fitting the Poisson mixed model built on data cloning with a simple print, summary and predict method:

> glmmPois <- function(formula,
+     data = parent.frame(), n.clones, ...) {
+     lhs <- formula[[2]]
+     Y <- eval(lhs, data)
+     formula[[2]] <- NULL
+     rhs <- model.frame(formula, data)
+     X <- model.matrix(attr(rhs, "terms"), rhs)
+     dat <- list(n = length(Y), Y = Y,
+         X = X, np = ncol(X))
+     dcdat <- dclone(dat, n.clones,
+         multiply = "n", unchanged = "np")
+     mod <- jags.fit(dcdat, c("beta", "sigma"),
+         glmm.model, ...)
+     coefs <- coef(mod)
+     names(coefs) <- c(colnames(X),
+         "sigma")
+     rval <- list(coefficients = coefs,
+         call = match.call(),
+         mcmc = mod, y = Y, x = rhs,
+         model = X, formula = formula)
+     class(rval) <- "glmmPois"
+     rval
+ }
> print.glmmPois <- function(x, ...) {
+     cat("glmmPois model\n\n")
+     print(format(coef(x), digits = 4),
+         print.gap = 2, quote = FALSE)
+     cat("\n")
+     invisible(x)
+ }
> summary.glmmPois <- function(object, ...) {
+     x <- cbind("Estimate" = coef(object),
+         "Std. Error" = dcsd(object$mcmc),
+         confint(object$mcmc))
+     cat("Call:", deparse(object$call,
+         width.cutoff = getOption("width")),
+         "\n", sep="\n")
+     cat("glmmPois model\n\n")
+     printCoefmat(x, ...)
+     cat("\n")
+     invisible(x)
+ }
> predict.glmmPois <- function(object,
+     newdata = NULL, type = c("mu", "lambda", "Y"),
+     level = 0.95, ...){
+     prec <- solve(vcov(object$mcmc))
+     prec <- make.symmetric(prec)
+     param <- coef(object)
+     if (is.null(newdata)) {
+         X <- object$model
+     } else {
+         rhs <- model.frame(object$formula, newdata)
+         X <- model.matrix(attr(rhs, "terms"), rhs)
+     }
+     type <- match.arg(type)
+     prdat <- list(n = nrow(X), X = X,
+         np = ncol(X), param = param, prec = prec)
+     prval <- jags.fit(prdat, type, glmm.pred, ...)
+     a <- (1 - level)/2
+     a <- c(a, 1 - a)
+     rval <- list(fit = coef(prval),
+         ci.fit = quantile(prval, probs = a))
+     rval
+ }

Note that the functions glmm.model and glmm.pred containing the BUGS code are used within these R functions. This implementation works fine, but is not adequate when building a contributed R package, because functions such as dnorm and inprod are not valid R objects, etc. For R packages, the best way is to represent the BUGS model as a character vector with lines as elements, and put that inside the R function. The custommodel function of the dclone package can be used to create such character vectors and pass them to other dclone functions via the model argument.

Now we fit the model for the ovenbird data set to estimate the effect of human disturbance on Ovenbird abundance. We fit the model using the function glmmPois:

> data(ovenbird)
> obmod <- glmmPois(count ~ uplow + thd,
+     ovenbird, n.clones = 5, n.update = 1000,
+     n.iter = 1000)

Then print the object and inspect the summary:

> obmod

glmmPois model

(Intercept)  uplowlowland       thd
    2.00312      -1.34242  -0.01647
      sigma
    1.19318

> summary(obmod)

Call: glmmPois(formula = count ~ uplow + thd,
    data = ovenbird, n.clones = 5,
    n.update = 1000, n.iter = 1000)

glmmPois model

              Estimate  Std. Error    2.5 %  97.5 %
(Intercept)    2.00312     0.13767  1.73328    2.27
uplowlowland  -1.34242     0.21503 -1.76387   -0.92
thd           -0.01647     0.00569 -0.02763   -0.01
sigma          1.19318     0.09523  1.00653    1.38

Finally predict abundances as a function of disturbance (0–100%) by controlling for site characteristics (Figure 3):

> thd <- seq(0, 100, len = 101)
> ndata <- data.frame(uplow = rep("lowland",
+     length(thd)), thd = thd)
> levels(ndata$uplow) <- levels(ovenbird$uplow)
> obpred <- predict(obmod, ndata, "lambda")

Figure 3: Expected Ovenbird abundance (lambda) as the function of percentage human disturbance (thd) based on the Poisson mixed model. Line represents the mean, gray shading indicates 95% prediction intervals. Points are observations.

Ovenbird abundance was significantly higher in upland sites, and human disturbance had a significant negative effect on expected Ovenbird abundance. Unexplained variation (σ² = 1.425 ± 0.102 SE) was
substantial, thus the choice of the Poisson mixed model makes sense for this data set.

Summary

The data cloning algorithm is especially useful for complex models for which other likelihood based computational methods fail. The algorithm also can numerically reveal potential identifiability issues related to hierarchical models. The dclone package supports established MCMC software and provides low level functions to help implementing high level estimating procedures to get maximum likelihood inferences and predictions for more specialized problems based on the data cloning algorithm.

Acknowledgements

Subhash Lele, Khurram Nadeem and Gabor Grothendieck have provided invaluable suggestions and feedback on this work. Comments of Martyn Plummer and two anonymous reviewers greatly improved the quality of the paper. Funding was provided by the Alberta Biodiversity Monitoring Institute and the Natural Sciences and Engineering Research Council.

Péter Sólymos
Alberta Biodiversity Monitoring Institute
Department of Biological Sciences
University of Alberta
solymos@ualberta.ca

Bibliography

S. P. Brooks and A. Gelman. General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7:434–455, 1997.

S. P. Brooks and B. J. T. Morgan. Optimization using simulated annealing. Statistician, 44:241–257, 1995.

G. F. Gause. The struggle for existence. Wilkins, Baltimore, Maryland, USA, 1934.

A. Gelman and D. B. Rubin. Inference from iterative simulation using multiple sequences. Statistical Science, 7:457–511, 1992.

A. Gelman, J. Carlin, H. Stern, and D. Rubin. Bayesian Data Analysis. CRC Press, Boca Raton, 2nd edition, 2003.

W. Gilks, S. Richardson, and D. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman & Hall, London, 1996.

K. A. Hobson and E. M. Bayne. Breeding bird communities in boreal forest of western Canada: Consequences of "unmixing" the mixedwoods. Condor, 102:759–769, 2002.

S. R. Lele, B. Dennis, and F. Lutscher. Data cloning: easy maximum likelihood estimation for complex ecological models using Bayesian Markov chain Monte Carlo methods. Ecology Letters, 10:551–563, 2007.

S. R. Lele, K. Nadeem, and B. Schmuland. Estimability and likelihood inference for generalized linear mixed models using data cloning. Journal of the American Statistical Association, 2010. In press.

D. Lunn, D. Spiegelhalter, A. Thomas, and N. Best. The BUGS project: Evolution, critique and future directions. Statistics in Medicine, 28:3049–3067, 2009. With discussion.

M. Plummer. JAGS Version 2.0.0 manual (April 26, 2010), 2010a. URL http://mcmc-jags.sourceforge.net.

M. Plummer. rjags: Bayesian graphical models using MCMC, 2010b. URL http://mcmc-jags.sourceforge.net. R package version 2.0.0-2.

M. Plummer, N. Best, K. Cowles, and K. Vines. coda: Output analysis and diagnostics for MCMC, 2010. URL http://cran.r-project.org/web/packages/coda/index.html. R package version 0.13-5.

J. M. Ponciano, M. L. Taper, B. Dennis, and S. R. Lele. Hierarchical models in ecology: confidence intervals, hypothesis testing, and model selection using data cloning. Ecology, 90:356–362, 2009.

P. Sólymos. dclone: Data Cloning and MCMC Tools for Maximum Likelihood Methods, 2010. URL http://cran.r-project.org/packages=dclone. R package version 1.2-0.

D. Spiegelhalter, A. Thomas, N. Best, and D. Lunn. OpenBUGS User Manual, Version 3.0.2, September 2007, 2007. URL http://mathstat.helsinki.fi/openbugs/.

D. J. Spiegelhalter, A. Thomas, N. G. Best, and D. Lunn. WinBUGS version 1.4 users manual. MRC Biostatistics Unit, Cambridge, 2003. URL http://www.mrc-bsu.cam.ac.uk/bugs/.

D. O. Stram and J. W. Lee. Variance components testing in the longitudinal mixed effects model. Biometrics, 50:1171–1177, 1994.

S. Sturtz, U. Ligges, and A. Gelman. R2WinBUGS: A package for running WinBUGS from R. Journal of Statistical Software, 12(3):1–16, 2005. URL http://www.jstatsoft.org.

A. Thomas, B. O'Hara, U. Ligges, and S. Sturtz. Making BUGS open. R News, 6(1):12–17, 2006. URL http://cran.r-project.org/doc/Rnews/.
stringr: modern, consistent string processing

by Hadley Wickham

Abstract String processing is not glamorous, but it is frequently used in data cleaning and preparation. The existing string functions in R are powerful, but not friendly. To remedy this, the stringr package provides string functions that are simpler and more consistent, and also fixes some functionality that R is missing compared to other programming languages.

Introduction

Strings are not glamorous, high-profile components of R, but they do play a big role in many data cleaning and preparation tasks. R provides a solid set of string operations, but because they have grown organically over time, they can be inconsistent and a little hard to learn. Additionally, they lag behind the string operations in other programming languages, so that some things that are easy to do in languages like Ruby or Python are rather hard to do in R. The stringr package aims to remedy these problems by providing a clean, modern interface to common string operations.

More concretely, stringr:

• Processes factors and characters in the same way.

• Gives functions consistent names and arguments.

• Simplifies string operations by eliminating options that you don't need 95% of the time (the other 5% of the time you can use the base functions).

• Produces outputs that can easily be used as inputs. This includes ensuring that missing inputs result in missing outputs, and zero length inputs result in zero length outputs.

• Completes R's string handling functions with useful functions from other programming languages.

To meet these goals, stringr provides two basic families of functions:

• basic string operations, and

• pattern matching functions which use regular expressions to detect, locate, match, replace, extract, and split strings.

These are described in more detail in the following sections.

Basic string operations

There are three string functions that are closely related to their base R equivalents, but with a few enhancements:

• str_c is equivalent to paste, but it uses the empty string ("") as the default separator and silently removes zero length arguments.

• str_length is equivalent to nchar, but it preserves NA's (rather than giving them length 2) and converts factors to characters (not integers).

• str_sub is equivalent to substr but it returns a zero length vector if any of its inputs are zero length, and otherwise expands each argument to match the longest. It also accepts negative positions, which are calculated from the left of the last character. The end position defaults to -1, which corresponds to the last character.

• str_sub<- is equivalent to substr<-, but like str_sub it understands negative indices, and replacement strings do not need to be the same length as the string they are replacing.

Three functions add new functionality:

• str_dup to duplicate the characters within a string.

• str_trim to remove leading and trailing whitespace.

• str_pad to pad a string with extra whitespace on the left, right, or both sides.

Pattern matching

stringr provides pattern matching functions to detect, locate, extract, match, replace, and split strings:

• str_detect detects the presence or absence of a pattern and returns a logical vector. Based on grepl.

• str_locate locates the first position of a pattern and returns a numeric matrix with columns start and end. str_locate_all locates all matches, returning a list of numeric matrices. Based on regexpr and gregexpr.
• str_extract extracts text corresponding to the first match, returning a character vector. str_extract_all extracts all matches and returns a list of character vectors.

• str_match extracts capture groups formed by () from the first match. It returns a character matrix with one column for the complete match and one column for each group. str_match_all extracts capture groups from all matches and returns a list of character matrices.

• str_replace replaces the first matched pattern and returns a character vector. str_replace_all replaces all matches. Based on sub and gsub.

• str_split_fixed splits the string into a fixed number of pieces based on a pattern and returns a character matrix. str_split splits a string into a variable number of pieces and returns a list of character vectors.

Figure 1 shows how the simple (single match) form of each of these functions works.

Arguments

Each pattern matching function has the same first two arguments, a character vector of strings to process and a single pattern (regular expression) to match. The replace functions have an additional argument specifying the replacement string, and the split functions have an argument to specify the number of pieces.

Unlike base string functions, stringr only offers limited control over the type of matching. The fixed() and ignore.case() functions modify the pattern to use fixed matching or to ignore case, but if you want to use perl-style regular expressions or to match on bytes instead of characters, you're out of luck and you'll have to use the base string functions. This is a deliberate choice made to simplify these functions. For example, while grepl has six arguments, str_detect only has two.

Regular expressions

To be able to use these functions effectively, you'll need a good knowledge of regular expressions (Friedl, 1997), which this paper is not going to teach you. Some useful tools to get you started:

• A good reference sheet¹

• A tool that allows you to interactively test² what a regular expression will match

• A tool to build a regular expression³ from an input string

When writing regular expressions, I strongly recommend generating a list of positive (pattern should match) and negative (pattern shouldn't match) test cases to ensure that you are matching the correct components.

Functions that return lists

Many of the functions return a list of vectors or matrices. To work with each element of the list there are two strategies: iterate through a common set of indices, or use mapply to iterate through the vectors simultaneously. The first approach is usually easier to understand and is illustrated in Figure 2.

Conclusion

stringr provides an opinionated interface to strings in R. It makes string processing simpler by removing uncommon options, and by vigorously enforcing consistency across functions. I have also added new functions that I have found useful from Ruby, and over time, I hope users will suggest useful functions from other programming languages. I will continue to build on the included test suite to ensure that the package behaves as expected and remains bug free.

Bibliography

J. E. Friedl. Mastering Regular Expressions. O'Reilly, 1997. URL http://oreilly.com/catalog/9781565922570.

Hadley Wickham
Department of Statistics
Rice University
6100 Main St MS#138
Houston TX 77005-1827
USA
hadley@rice.edu

¹ http://www.regular-expressions.info/reference.html
² http://gskinner.com/RegExr/
³ http://www.txt2re.com
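A minimal sketch of the test-case advice above, reusing the phone-number pattern from Figure 1 (assumes stringr is installed; the example strings are illustrative, not from a test suite):

```r
library(stringr)

# Phone-number pattern from Figure 1
phone <- "([2-9][0-9]{2})[- .]([0-9]{3})[- .]([0-9]{4})"

# Positive cases: the pattern should match all of these
should_match <- c("219 733 8965", "329-293-8753", "233.398.9187")

# Negative cases: the pattern should match none of these
# ("119 733 8965" fails because the exchange must start with [2-9])
should_not_match <- c("banana", "$1000", "119 733 8965")

all(str_detect(should_match, phone))      # expect TRUE
any(str_detect(should_not_match, phone))  # expect FALSE
```

Keeping the two vectors next to the pattern makes it cheap to re-check the pattern after every modification.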
library(stringr)
strings <- c(" 219 733 8965", "329-293-8753 ", "banana",
  "595 794 7569", "387 287 6718", "apple", "233.398.9187 ",
  "482 952 3315", "239 923 8115", "842 566 4692",
  "Work: 579-499-7527", "$1000", "Home: 543.355.3679")
phone <- "([2-9][0-9]{2})[- .]([0-9]{3})[- .]([0-9]{4})"

# Which strings contain phone numbers?
str_detect(strings, phone)
strings[str_detect(strings, phone)]

# Where in the string is the phone number located?
loc <- str_locate(strings, phone)
loc

# Extract just the phone numbers
str_sub(strings, loc[, "start"], loc[, "end"])
# Or more conveniently:
str_extract(strings, phone)

# Pull out the three components of the match
str_match(strings, phone)

# Anonymise the data
str_replace(strings, phone, "XXX-XXX-XXXX")

Figure 1: Simple string matching functions for processing a character vector containing phone numbers (among other things).

library(stringr)
col2hex <- function(col) {
  rgb <- col2rgb(col)
  rgb(rgb["red", ], rgb["green", ], rgb["blue", ], max = 255)
}

# Goal: replace colour names in a string with their hex equivalent
strings <- c("Roses are red, violets are blue",
  "My favourite colour is green")
colours <- str_c("\\b", colors(), "\\b", collapse = "|")

# This gets us the colours, but we have no way of replacing them
str_extract_all(strings, colours)

# Instead, let's work with locations
locs <- str_locate_all(strings, colours)
sapply(seq_along(strings), function(i) {
  string <- strings[i]
  loc <- locs[[i]]

  # Convert colours to hex and replace
  hex <- col2hex(str_sub(string, loc[, "start"], loc[, "end"]))
  str_sub(string, loc[, "start"], loc[, "end"]) <- hex
  string
})

Figure 2: A more complex situation involving iteration through a string and processing matches with a function.
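The enhancements listed under "Basic string operations" can also be checked directly at the console; a short sketch (assumes stringr is installed; behaviour as described in that section):

```r
library(stringr)

str_c("x", "y")               # "xy": the empty string is the default separator
str_c("x", character(0), "y") # "xy": zero length arguments are silently removed
str_length(c("abc", NA))      # 3 NA: NA is preserved rather than counted as "NA"
str_sub("stringr", -3, -1)    # "ngr": negative positions count back from the end
str_sub("stringr", 1)         # "stringr": the end position defaults to -1
```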
Bayesian Estimation of the GARCH(1,1) Model with Student-t Innovations
by David Ardia and Lennart F. Hoogerheide

Abstract This note presents the R package bayesGARCH which provides functions for the Bayesian estimation of the parsimonious and effective GARCH(1,1) model with Student-t innovations. The estimation procedure is fully automatic and thus avoids the tedious task of tuning an MCMC sampling algorithm. The usage of the package is shown in an empirical application to exchange rate log-returns.

Introduction

Research on changing volatility using time series models has been active since the pioneer paper by Engle (1982). From there, ARCH (AutoRegressive Conditional Heteroscedasticity) and GARCH (Generalized ARCH) type models grew rapidly into a rich family of empirical models for volatility forecasting during the 80's. These models are widespread and essential tools in financial econometrics.

In the GARCH(p,q) model introduced by Bollerslev (1986), the conditional variance at time t of the log-return y_t (of a financial asset or a financial index), denoted by h_t, is postulated to be a linear function of the squares of past q log-returns and past p conditional variances. More precisely:

  h_t = α_0 + Σ_{i=1}^{q} α_i y_{t−i}² + Σ_{j=1}^{p} β_j h_{t−j},

where the parameters satisfy the constraints α_i ≥ 0 (i = 0, . . . , q) and β_j ≥ 0 (j = 1, . . . , p) in order to ensure a positive conditional variance. In most empirical applications it turns out that the simple specification p = q = 1 is able to reproduce the volatility dynamics of financial data. This has led the GARCH(1,1) model to become the workhorse model by both academics and practitioners. Given a model specification for h_t, the log-returns are then modelled as y_t = ε_t h_t^{1/2}, where ε_t are i.i.d. disturbances. Common choices for ε_t are Normal and Student-t disturbances. The Student-t specification is particularly useful, since it can provide the excess kurtosis in the conditional distribution that is often found in financial time series processes (unlike models with Normal innovations).

Until recently, GARCH models have mainly been estimated using the classical Maximum Likelihood technique. Several R packages provide functions for their estimation; see, e.g., fGarch (Wuertz and Chalabi, 2009), rgarch (Ghalanos, 2010) and tseries (Trapletti and Hornik, 2009). The Bayesian approach offers an attractive alternative which enables small sample results, robust estimation, model discrimination, model combination, and probabilistic statements on (possibly nonlinear) functions of the model parameters.

The package bayesGARCH (Ardia, 2007) implements the Bayesian estimation procedure described in Ardia (2008, chapter 5) for the GARCH(1,1) model with Student-t innovations. The approach, based on the work of Nakatsuma (1998), consists of a Metropolis-Hastings (MH) algorithm where the proposal distributions are constructed from auxiliary ARMA processes on the squared observations. This methodology avoids the time-consuming and difficult task, especially for non-experts, of choosing and tuning a sampling algorithm. The program is written in R with some subroutines implemented in C in order to speed up the simulation procedure. The validity of the algorithm as well as the correctness of the computer code have been verified by the method of Geweke (2004).

Model, priors and MCMC scheme

A GARCH(1,1) model with Student-t innovations for the log-returns {y_t} may be written via data augmentation (see Geweke, 1993) as

  y_t = ε_t (ϖ_t ((ν−2)/ν) h_t)^{1/2},  t = 1, . . . , T
  ε_t ~ iid N(0, 1)
  ϖ_t ~ iid IG(ν/2, ν/2)                                (1)
  h_t = α_0 + α_1 y_{t−1}² + β h_{t−1},

where α_0 > 0, α_1, β ≥ 0 and ν > 2; N(0,1) denotes the standard normal distribution; IG denotes the inverted gamma distribution. The restriction on the degrees of freedom parameter ν ensures the conditional variance to be finite and the restrictions on the GARCH parameters α_0, α_1 and β guarantee its positivity. We emphasize the fact that only positivity constraints are implemented in the MH algorithm; no stationarity conditions are imposed in the simulation procedure.

In order to write the likelihood function, we define the vectors y = (y_1, . . . , y_T)′, ϖ = (ϖ_1, . . . , ϖ_T)′ and α = (α_0, α_1)′. We regroup the model parameters into the vector ψ = (α, β, ν). Then, upon defining the T × T diagonal matrix

  Σ = Σ(ψ, ϖ) = diag({ϖ_t ((ν−2)/ν) h_t(α, β)}_{t=1}^{T}),
where h_t(α, β) = α_0 + α_1 y_{t−1}² + β h_{t−1}(α, β), we can express the likelihood of (ψ, ϖ) as

  L(ψ, ϖ | y) ∝ (det Σ)^{−1/2} exp[−(1/2) y′ Σ^{−1} y].        (2)

The Bayesian approach considers (ψ, ϖ) as a random variable which is characterized by a prior density denoted by p(ψ, ϖ). The prior is specified with the help of parameters called hyperparameters which are initially assumed to be known and constant. Moreover, depending on the researcher's prior information, this density can be more or less informative. Then, by coupling the likelihood function of the model parameters with the prior density, we can transform the probability density using Bayes' rule to get the posterior density p(ψ, ϖ | y) as follows:

  p(ψ, ϖ | y) = L(ψ, ϖ | y) p(ψ, ϖ) / ∫ L(ψ, ϖ | y) p(ψ, ϖ) dψ dϖ.   (3)

This posterior is a quantitative, probabilistic description of the knowledge about the model parameters after observing the data. For an excellent introduction on Bayesian econometrics we refer the reader to Koop (2003).

We use truncated normal priors on the GARCH parameters α and β

  p(α) ∝ φ_{N₂}(α | μ_α, Σ_α) 1{α ∈ R²₊}
  p(β) ∝ φ_{N₁}(β | μ_β, Σ_β) 1{β ∈ R₊},

where μ• and Σ• are the hyperparameters, 1{·} is the indicator function and φ_{N_d} is the d-dimensional normal density.

The prior distribution of vector ϖ conditional on ν is found by noting that the components ϖ_t are independent and identically distributed from the inverted gamma density, which yields

  p(ϖ | ν) = (ν/2)^{Tν/2} Γ(ν/2)^{−T} (∏_{t=1}^{T} ϖ_t)^{−(ν/2)−1} exp[−(1/2) Σ_{t=1}^{T} ν/ϖ_t].

We follow Deschamps (2006) in the choice of the prior distribution on the degrees of freedom parameter. The distribution is a translated exponential with parameters λ > 0 and δ ≥ 2

  p(ν) = λ exp[−λ(ν − δ)] 1{ν > δ}.

For large values of λ, the mass of the prior is concentrated in the neighborhood of δ and a constraint on the degrees of freedom can be imposed in this manner. Normality of the errors is assumed when δ is chosen large. As pointed out by Deschamps (2006), this prior density is useful for two reasons. First, it is potentially important, for numerical reasons, to bound the degrees of freedom parameter away from two to avoid explosion of the conditional variance. Second, we can approximate the normality of the errors while maintaining a reasonably tight prior which can improve the convergence of the sampler.

The joint prior distribution is then formed by assuming prior independence between the parameters, i.e. p(ψ, ϖ) = p(α) p(β) p(ϖ | ν) p(ν).

The recursive nature of the GARCH(1,1) variance equation implies that the joint posterior and the full conditional densities cannot be expressed in closed form. There exists no (conjugate) prior that can remedy this property. Therefore, we cannot use the simple Gibbs sampler and need to rely on a more elaborate Markov chain Monte Carlo (MCMC) simulation strategy to approximate the posterior density. The idea of MCMC sampling was first introduced by Metropolis et al. (1953) and was subsequently generalized by Hastings (1970). The sampling strategy relies on the construction of a Markov chain with realizations (ψ^[0], ϖ^[0]), . . . , (ψ^[j], ϖ^[j]), . . . in the parameter space. Under appropriate regularity conditions, asymptotic results guarantee that as j tends to infinity, (ψ^[j], ϖ^[j]) tends in distribution to a random variable whose density is (3). Hence, after discarding a burn-in of the first draws, the realized values of the chain can be used to make inference about the joint posterior.

The MCMC sampler implemented in the package bayesGARCH is based on the approach of Ardia (2008, chapter 5), inspired from the previous work by Nakatsuma (1998). The algorithm consists of a MH algorithm where the GARCH parameters are updated by blocks (one block for α and one block for β) while the degrees of freedom parameter is sampled using an optimized rejection technique from a translated exponential source density. This methodology has the advantage of being fully automatic. Moreover, in our experience, the algorithm explores the domain of the joint posterior efficiently compared to naive MH approaches or the Griddy-Gibbs sampler of Ritter and Tanner (1992).

Illustration

We apply our Bayesian estimation methods to daily observations of the Deutschmark vs British Pound (DEM/GBP) foreign exchange log-returns. The sample period is from January 3, 1985, to December 31, 1991, for a total of 1 974 observations. This data set has been promoted as an informal benchmark for GARCH time series software validation. From this time series, the first 750 observations are used to illustrate the Bayesian approach. The observation window excerpt from our data set is plotted in Figure 1.
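As an aside, the data-augmented model (1) is straightforward to simulate in base R, which can be useful for sanity checks before estimation. A sketch with arbitrary parameter values (chosen for illustration, not estimates from this note):

```r
set.seed(42)
n <- 750
alpha0 <- 0.03; alpha1 <- 0.2; beta <- 0.7; nu <- 6

y <- h <- numeric(n)
y.prev <- 0
h.prev <- alpha0 / (1 - alpha1 - beta)  # start from the unconditional variance

for (t in 1:n) {
  # GARCH(1,1) variance recursion: h_t = alpha0 + alpha1 y_{t-1}^2 + beta h_{t-1}
  h[t] <- alpha0 + alpha1 * y.prev^2 + beta * h.prev
  # varpi_t ~ IG(nu/2, nu/2), i.e. the reciprocal of a Gamma(nu/2, rate = nu/2)
  w <- 1 / rgamma(1, shape = nu / 2, rate = nu / 2)
  # y_t = eps_t (varpi_t (nu-2)/nu h_t)^(1/2)
  y[t] <- rnorm(1) * sqrt(w * (nu - 2) / nu * h[t])
  y.prev <- y[t]; h.prev <- h[t]
}
```

Marginally, ε_t (ϖ_t (ν−2)/ν)^{1/2} is a Student-t variate with ν degrees of freedom rescaled to unit variance, so y_t has conditional variance h_t.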
Figure 1: DEM/GBP foreign exchange daily log-returns (in percent).

We fit the GARCH(1,1) model with Student-t innovations to the data for this observation window using the bayesGARCH function

> args(bayesGARCH)
function (y, mu.alpha = c(0, 0),
    Sigma.alpha = 1000 * diag(1, 2),
    mu.beta = 0, Sigma.beta = 1000,
    lambda = 0.01, delta = 2,
    control = list())

The input arguments of the function are the vector of data, the hyperparameters and the list control which can supply any of the following elements:

• n.chain: number of MCMC chain(s) to be generated; default 1.

• l.chain: length of each MCMC chain; default 10000.

• start.val: vector of starting values of the chain(s); default c(0.01,0.1,0.7,20). Alternatively, the starting values could be set to the maximum likelihood estimates using the function fGarch available in the package fGarch, for instance.

• addPriorConditions: function which allows the user to add any constraint on the model parameters; default NULL, i.e. no additional constraints are imposed.

• refresh: frequency of reports; default 10.

• digits: number of printed digits in the reports; default 4.

As a prior distribution for the Bayesian estimation we take the default values in bayesGARCH, which are diffuse priors. We generate two chains for 5 000 passes each by setting the control parameter values n.chain = 2 and l.chain = 5000.

> data(dem2gbp)
> y <- dem2gbp[1:750]
> set.seed(1234)
> MCMC <- bayesGARCH(y, control = list(
+     l.chain = 5000, n.chain = 2))

chain: 1 iteration: 10
parameters: 0.0441 0.212 0.656 115
chain: 1 iteration: 20
parameters: 0.0346 0.136 0.747 136
...
chain: 2 iteration: 5000
parameters: 0.0288 0.190 0.754 4.67

The function outputs the MCMC chains as an object of the class "mcmc" from the package coda (Plummer et al., 2010). This package contains functions for post-processing the MCMC output; see Plummer et al. (2006) for an introduction. Note that coda is loaded automatically with bayesGARCH.

A trace plot of the MCMC chains (i.e. a plot of iterations vs. sampled values) can be generated using the function traceplot; the output is displayed in Figure 2.

Convergence of the sampler (using the diagnostic test of Gelman and Rubin (1992)), acceptance rates and autocorrelations in the chains can be computed as follows:

> gelman.diag(MCMC)

       Point est. 97.5% quantile
alpha0       1.02           1.07
alpha1       1.01           1.05
beta         1.02           1.07
nu           1.02           1.06

Multivariate psrf

1.02

> 1 - rejectionRate(MCMC)

alpha0 alpha1   beta     nu
 0.890  0.890  0.953  1.000

> autocorr.diag(MCMC)

       alpha0 alpha1  beta    nu
Lag 0   1.000  1.000 1.000 1.000
Lag 1   0.914  0.872 0.975 0.984
Lag 5   0.786  0.719 0.901 0.925
Lag 10  0.708  0.644 0.816 0.863
Lag 50  0.304  0.299 0.333 0.558

The convergence diagnostic shows no evidence against convergence for the last 2 500 iterations (only
Figure 2: Trace plot of the two MCMC chains (in black and gray) for the four model parameters generated by the MH algorithm.

the second half of the chain is used by default in gelman.diag) since the scale reduction factor is smaller than 1.2; see Gelman and Rubin (1992) for details. The MCMC sampling algorithm reaches very high acceptance rates ranging from 89% for vector α to 95% for β suggesting that the proposal distributions are close to the full conditionals. The rejection technique used to generate ν allows a new value to be drawn at each pass in the MH algorithm.

The one-lag autocorrelations in the chains range from 0.87 for parameter α_1 to 0.98 for parameter ν. Using the function formSmpl, we discard the first 2 500 draws from the overall MCMC output as a burn-in period, keep only every second draw to diminish the autocorrelation, and merge the two chains to get a final sample length of 2 500.

> smpl <- formSmpl(MCMC, l.bi = 2500,
    batch.size = 2)

n.chain   : 2
l.chain   : 5000
l.bi      : 2500
batch.size: 2
smpl size : 2500

Basic posterior statistics can be easily obtained with the summary method available for mcmc objects.

> summary(smpl)

Iterations = 1:2500
Thinning interval = 1
Number of chains = 1
Sample size per chain = 2500

1. Empirical mean and standard deviation for each variable, plus standard error of the mean:

         Mean     SD Naive SE Time-series SE
alpha0 0.0345 0.0138 0.000277        0.00173
alpha1 0.2360 0.0647 0.001293        0.00760
beta   0.6832 0.0835 0.001671        0.01156
nu     6.4019 1.5166 0.030333        0.19833

2. Quantiles for each variable:

         2.5%   25%    50%    75%   97.5%
alpha0 0.0126 0.024 0.0328 0.0435  0.0646
alpha1 0.1257 0.189 0.2306 0.2764  0.3826
beta   0.5203 0.624 0.6866 0.7459  0.8343
nu     4.2403 5.297 6.1014 7.2282 10.1204

The marginal distributions of the model parameters can be obtained by first transforming the output into a matrix and then using the function hist.
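Once the draws are in matrix form, functions of the parameters (such as the persistence α_1 + β and the unconditional variance α_0/(1 − α_1 − β)) are equally easy to post-process column-wise. A sketch with stand-in draws (illustration only, NOT the posterior output of this note):

```r
set.seed(1)
# Stand-in "posterior" draws, used only to show the post-processing mechanics
draws <- cbind(alpha0 = runif(2500, 0.01, 0.07),
               alpha1 = runif(2500, 0.10, 0.40),
               beta   = runif(2500, 0.50, 0.85))

# Persistence alpha1 + beta, one value per draw
persistence <- draws[, "alpha1"] + draws[, "beta"]
mean(persistence < 1)  # estimated probability of covariance stationarity

# Unconditional variance alpha0 / (1 - alpha1 - beta),
# conditional on alpha1 + beta < 1 (i.e. on existence)
ok <- persistence < 1
uvar <- draws[ok, "alpha0"] / (1 - persistence[ok])
quantile(uvar, c(0.05, 0.95))  # 90% credible interval, conditional on existence
```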
Marginal posterior densities are displayed in Figure 3. We clearly notice the asymmetric shape of the histograms; this is especially true for parameter ν. This is also reflected by the differences between the posterior means and medians. These results should warn us against the abusive use of asymptotic justifications. In the present case, even 750 observations do not suffice to justify the asymptotic symmetric normal approximation for the parameter estimator's distribution.

Probabilistic statements on nonlinear functions of the model parameters can be straightforwardly obtained by simulation from the joint posterior sample. In particular, we can test the covariance stationarity condition and estimate the density of the unconditional variance when this condition is satisfied. Under the GARCH(1,1) specification, the process is covariance stationary if α_1 + β < 1, as shown by Bollerslev (1986, page 310). The term (α_1 + β) is the degree of persistence in the autocorrelation of the squares which controls the intensity of the clustering in the variance process. With a value close to one, past shocks and past variances will have a longer impact on the future conditional variance.

To make inference on the persistence of the squared process, we simply use the posterior sample and generate (α_1^[j] + β^[j]) for each draw ψ^[j] in the posterior sample. The posterior density of the persistence is plotted in Figure 4. The histogram is left-skewed with a median value of 0.923 and a maximum value of 1.050. In this case, the covariance stationarity of the process is supported by the data. The unconditional variance of the GARCH(1,1) model is α_0/(1 − α_1 − β) given that α_1 + β < 1. Conditionally upon existence, the posterior mean is 0.387 and the 90% credible interval is [0.274, 1.378]. The empirical variance is 0.323.

Figure 4: Posterior density of the persistence α_1 + β. The histogram is based on 2 500 draws from the joint posterior distribution.

Other probabilistic statements on interesting functions of the model parameters can be obtained using the joint posterior sample. Under specification (1), the conditional kurtosis is 3(ν − 2)/(ν − 4) provided that ν > 4. Using the posterior sample, we estimate the posterior probability of existence for the conditional kurtosis to be 0.994. Therefore, the existence is clearly supported by the data. Conditionally upon existence, the posterior mean of the kurtosis is 8.21, the median is 5.84 and the 95% confidence interval is [4.12, 15.81], indicating heavier tails than for the normal distribution. The positive skewness of the posterior for the conditional kurtosis is caused by a couple of very large values (the maximum simulated value is 404.90). These correspond to draws with ν slightly larger than 4. Note that if one desires to rule out large values for the conditional kurtosis beforehand, then one can set δ > 4 in the prior for ν. For example, the choice δ = 4.5 would guarantee the kurtosis to be smaller than 15.

Prior restrictions and normal innovations

The control parameter addPriorConditions can be used to impose any type of constraints on the model parameters ψ during the estimation. For instance, to ensure the estimation of a covariance stationary GARCH(1,1) model, the function should be defined as

> addPriorConditions <- function(psi)
+     psi[2] + psi[3] < 1

Finally, we can impose normality of the innovations in a straightforward manner by setting the hyperparameters λ = 100 and δ = 500 in the bayesGARCH function.

Practical advice

The estimation strategy implemented in bayesGARCH is fully automatic and does not require any tuning of the MCMC sampler. This is certainly an appealing feature for practitioners. The generation of the Markov chains is however time consuming and estimating the model over several datasets on a daily basis can therefore take a significant amount of time. In this case, the algorithm can be easily parallelized, by running a single chain on several processors. This can be easily achieved with the package foreach (REvolution Computing, 2009), for instance. Also, when the estimation is repeated over updated time series (i.e. time series with more recent observations), it is wise to start the algorithm using the posterior mean or median of the parameters obtained at the previous estimation step. The
Figure 3: Marginal posterior distributions of the model parameters. The histograms are based on 2 500 draws from the joint posterior sample.

impact of the starting values (burn-in phase) is likely to be smaller and thus the convergence faster.

Finally, note that as any MH algorithm, the sampler can get stuck at a given value, so that the chain does not move anymore. However, the sampler uses tailor-made candidate densities that are especially constructed at each step, so it is almost impossible for this MCMC sampler to get stuck at a given value for many subsequent draws. For example, for our data set we still obtain posterior results that are almost equal to the results that we obtained for the reasonable default initial values c(0.01,0.1,0.7,20), even if we take the very poor initial values c(0.1,0.01,0.4,50). In the unlikely case that such ill behaviour does occur, one could scale the data (to have standard deviation 1), or run the algorithm with different initial values or a different random seed.

Summary

This note presented the Bayesian estimation of the GARCH(1,1) model with Student-t innovations using the R package bayesGARCH. We illustrated the use of the package with an empirical application to foreign exchange rate log-returns.

Acknowledgements

The authors acknowledge two anonymous reviewers and the associate editor, Martyn Plummer, for helpful comments that have led to improvements of this note. David Ardia is grateful to the Swiss National Science Foundation (under grant #FN PB FR1-121441) for financial support. Any remaining errors or shortcomings are the authors' responsibility.

Bibliography

D. Ardia. Financial Risk Management with Bayesian Estimation of GARCH Models: Theory and Applications, volume 612 of Lecture Notes in Economics and Mathematical Systems. Springer-Verlag, Berlin, Germany, June 2008. ISBN 978-3-540-78656-6. URL http://www.springer.com/economics/econometrics/book/978-3-540-78656-6.

D. Ardia. bayesGARCH: Bayesian Estimation of the GARCH(1,1) Model with Student-t Innovations in R,
2007. URL http://CRAN.R-project.org/package=bayesGARCH. R package version 1.00-05.

T. Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3):307–327, Apr. 1986.

P. J. Deschamps. A flexible prior distribution for Markov switching autoregressions with Student-t errors. Journal of Econometrics, 133(1):153–190, July 2006.

R. F. Engle. Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation. Econometrica, 50(4):987–1008, July 1982.

A. Gelman and D. B. Rubin. Inference from iterative simulation using multiple sequences. Statistical Science, 7(4):457–472, Nov. 1992.

J. F. Geweke. Bayesian treatment of the independent Student-t linear model. Journal of Applied Econometrics, 8(S1):S19–S40, Dec. 1993.

J. F. Geweke. Getting it right: Joint distribution tests of posterior simulators. Journal of the American Statistical Association, 99(467):799–804, Sept. 2004.

A. Ghalanos. rgarch: Flexible GARCH modelling in R, 2010. URL http://r-forge.r-project.org/projects/rgarch.

W. K. Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97–109, Apr. 1970.

G. Koop. Bayesian Econometrics. Wiley-Interscience, London, UK, 2003. ISBN 0470845678.

N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller. Equations of state calculations by fast computing machines. Journal of Chemical Physics, 21(6):1087–1092, June 1953.

T. Nakatsuma. A Markov-chain sampling algorithm for GARCH models. Studies in Nonlinear Dynamics and Econometrics, 3(2):107–117, July 1998. URL http://www.bepress.com/snde/vol3/iss2/algorithm1/. Algorithm nr.1.

M. Plummer, N. Best, K. Cowles, and K. Vines. CODA: Convergence diagnosis and output analysis for MCMC. R News, 6(1):7–11, Mar. 2006.

M. Plummer, N. Best, K. Cowles, and K. Vines. coda: Output analysis and diagnostics for MCMC, 2010. URL http://CRAN.R-project.org/package=coda. R package version 0.13-5.

REvolution Computing. foreach: Foreach looping construct for R, 2009. URL http://CRAN.R-project.org/package=foreach.

C. Ritter and M. A. Tanner. Facilitating the Gibbs sampler: The Gibbs stopper and the Griddy-Gibbs sampler. Journal of the American Statistical Association, 87(419):861–868, Sept. 1992.

A. Trapletti and K. Hornik. tseries: Time series analysis and computational finance, 2009. URL http://CRAN.R-project.org/package=tseries.

D. Wuertz and Y. Chalabi. fGarch: Rmetrics - Autoregressive Conditional Heteroskedastic Modelling, 2009. URL http://CRAN.R-project.org/package=fGarch.

David Ardia
University of Fribourg, Switzerland
david.ardia@unifr.ch

Lennart F. Hoogerheide
Erasmus University Rotterdam, The Netherlands
cudaBayesreg: Bayesian Computation in CUDA
by Adelino Ferreira da Silva

Abstract Graphical processing units are rapidly gaining maturity as powerful general parallel computing devices. The package cudaBayesreg uses GPU-oriented procedures to improve the performance of Bayesian computations. The paper motivates the need for devising high-performance computing strategies in the context of fMRI data analysis. Some features of the package for Bayesian analysis of brain fMRI data are illustrated. Comparative computing performance figures between sequential and parallel implementations are presented as well.

A functional magnetic resonance imaging (fMRI) data set consists of time series of volume data in 4D space. Typically, volumes are collected as slices of 64 x 64 voxels. The most commonly used functional imaging technique relies on the blood oxygenation level dependent (BOLD) phenomenon (Sardy, 2007). By analyzing the information provided by the BOLD signals in 4D space, it is possible to make inferences about activation patterns in the human brain. The statistical analysis of fMRI experiments usually involves the formation and assessment of a statistic image, commonly referred to as a Statistical Parametric Map (SPM). The SPM summarizes a statistic indicating evidence of the underlying neuronal activations for a particular task. The most common approach to SPM computation involves a univariate analysis of the time series associated with each voxel. Univariate analysis techniques can be described within the framework of the general linear model (GLM) (Sardy, 2007). The GLM procedure used in fMRI data analysis is often said to be "massively univariate", since data for each voxel are independently fitted with the same model. Bayesian methodologies provide enhanced estimation accuracy (Friston et al., 2002). However, since (non-variational) Bayesian models draw on Markov Chain Monte Carlo (MCMC) simulations, Bayesian estimates involve a heavy computational burden.

The programmable Graphic Processor Unit (GPU) has evolved into a highly parallel processor with tremendous computational power and very high memory bandwidth (NVIDIA Corporation, 2010b). Modern GPUs are built around a scalable array of multithreaded streaming multiprocessors (SMs). Current GPU implementations enable scheduling thousands of concurrently executing threads. The Compute Unified Device Architecture (CUDA) (NVIDIA Corporation, 2010b) is a software platform for massively parallel high-performance computing on NVIDIA manycore GPUs. The CUDA programming model follows the standard single-program multiple-data (SPMD) model. CUDA greatly simplifies the task of parallel programming by providing thread management tools that work as extensions of conventional C/C++ constructions. Automatic thread management removes the burden of handling the scheduling of thousands of lightweight threads, and enables straightforward programming of the GPU cores.

The package cudaBayesreg (Ferreira da Silva, 2010a) implements a Bayesian multilevel model for the analysis of brain fMRI data in the CUDA environment. The statistical framework in cudaBayesreg is built around a Gibbs sampler for multilevel/hierarchical linear models with a normal prior (Ferreira da Silva, 2010c). Multilevel modeling may be regarded as a generalization of regression methods in which regression coefficients are themselves given a model with parameters estimated from data (Gelman, 2006). As in SPM, the Bayesian model fits a linear regression model at each voxel, but uses multivariate statistics for parameter estimation at each iteration of the MCMC simulation. The Bayesian model used in cudaBayesreg follows a two-stage Bayes prior approach to relate voxel regression equations through correlations between the regression coefficient vectors (Ferreira da Silva, 2010c). This model closely follows the Bayesian multilevel model proposed by Rossi, Allenby and McCulloch (Rossi et al., 2005), and implemented in bayesm (Rossi and McCulloch, 2008). This approach overcomes several important limitations of the SPM methodology traditionally used in fMRI, mainly its reliance on classical hypothesis tests and p-values to make statistical inferences in neuroimaging (Friston et al., 2002; Berger and Sellke, 1987; Vul et al., 2009). However, as is often the case with MCMC simulations, the implementation of this Bayesian model in a sequential computer entails significant time complexity. The CUDA implementation of the Bayesian model proposed here has been able to reduce significantly the runtime processing of the MCMC simulations. The main contribution to the increased performance comes from the use of separate threads for fitting the linear regression model at each voxel in parallel.

Bayesian multilevel modeling

We are interested in the following Bayesian multilevel model, which has been analyzed by Rossi
et al. (2005), and has been implemented as rhierLinearModel in bayesm. Start out with a general linear model, and fit a set of m voxels as,

    y_i = X_i β_i + ε_i,   ε_i ~ iid N(0, σ_i² I_{n_i}),   i = 1, . . . , m.    (1)

In order to tie together the voxels' regression equations, assume that the {β_i} have a common prior distribution. To build the Bayesian regression model we need to specify a prior on the {β_i} coefficients, and a prior on the regression error variances {σ_i²}. Following Ferreira da Silva (2010c), specify a normal regression prior with mean ∆′z_i for each β,

    β_i = ∆′z_i + ν_i,   ν_i ~ iid N(0, V_β),    (2)

where z_i is a vector of n_z elements, representing characteristics of each of the m regression equations. The prior (2) can be written using the matrix form of the multivariate regression model for k regression coefficients,

    B = Z∆ + V    (3)

where B and V are m × k matrices, Z is an m × n_z matrix, and ∆ is an n_z × k matrix. Interestingly, the prior (3) assumes the form of a second-stage regression, where each column of ∆ has coefficients which describe how the mean of the k regression coefficients varies as a function of the variables in z. In (3), Z assumes the role of a prior design matrix.

The proposed Bayesian model can be written down as a sequence of conditional distributions (Ferreira da Silva, 2010c),

    y_i | X_i, β_i, σ_i²
    β_i | z_i, ∆, V_β
    σ_i² | ν_i, s_i²    (4)
    V_β | ν, V
    ∆ | V_β, ∆̄, A.

Running MCMC simulations on the set of full conditional posterior distributions (4), the full posterior for all the parameters of interest may then be derived.

GPU computation

In this section, we describe some of the main design considerations underlying the code implementation in cudaBayesreg, and the options taken for processing fMRI data in parallel. Ideally, the GPU is best suited for computations that can be run on numerous data elements simultaneously in parallel (NVIDIA Corporation, 2010b). Typical textbook applications for the GPU involve arithmetic on large matrices, where the same operation is performed across thousands of elements at the same time. Since the Bayesian model of computation outlined in the previous section does not fit well in this framework, some design options had to be assumed in order to properly balance optimization, memory constraints, and implementation complexity, while maintaining numerical correctness.

Some design requirements for good performance on CUDA are as follows (NVIDIA Corporation, 2010a): (i) the software should use a large number of threads; (ii) different execution paths within the same thread block (warp) should be avoided; (iii) inter-thread communication should be minimized; (iv) data should be kept on the device as long as possible; (v) global memory accesses should be coalesced whenever possible; (vi) the use of shared memory should be preferred to the use of global memory. We detail below how well these requirements have been met in the code implementation.

The first requirement is easily met by cudaBayesreg. On the one hand, fMRI applications typically deal with thousands of voxels. On the other hand, the package uses three constants which can be modified to suit the available device memory and the computational power of the GPU. Specifically, REGDIM specifies the maximum number of regressions (voxels), OBSDIM specifies the maximum length of the time series observations, and XDIM specifies the maximum number of regression coefficients. Requirements (ii) and (iii) are satisfied by cudaBayesreg as well. Each thread executes the same code independently, and no inter-thread communication is required. Requirement (iv) is optimized in cudaBayesreg by using as much constant memory as permitted by the GPU. Data that do not change between MCMC iterations are kept in constant memory. Thus, we reduce expensive memory data transfers between host and device. For instance, the matrix of voxel predictors X (see (1)) is kept in constant memory. Requirement (v) is insufficiently met in cudaBayesreg. For maximum performance, memory accesses to global memory must be coalesced. However, different fMRI data sets and parameterizations may generate data structures with highly variable dimensions, thus rendering coalescence difficult to implement in a robust manner. Moreover, GPU devices of Compute Capability 1.x impose hard memory coalescing restrictions. Fortunately, GPU devices of Compute Capability 2.x are expected to lift some of the most taxing memory coalescing constraints. Finally, requirement (vi) is not met by the available code. The current kernel implementation does not use shared memory. The relative complexity of the Bayesian computation performed by each thread compared to the conventional arithmetic load assigned to the thread has prevented us from exploiting shared memory operations. The task assigned to the kernel may be subdivided to reduce kernel complexity. However, this option may easily compromise other optimization requirements, namely thread independence. As detailed in the next paragraph, our option has been to keep the computational design simple, by assigning the whole of the
univariate regression to the kernel.

The computational model has been specified as a grid of thread blocks of dimension 64, in which a separate thread is used for fitting a linear regression model at each voxel in parallel. Maximum efficiency is expected to be achieved when the total number of required threads to execute in parallel equals the number of voxels in the fMRI data set, after appropriate masking has been done. However, this approach typically calls for the parallel execution of several thousands of threads. To keep computational resources low, while maintaining significantly high efficiency, it is generally preferable to process fMRI data slice-by-slice. In this approach, slices are processed in sequence. Voxels in slices are processed in parallel. Thus, for slices of dimension 64 × 64, the required number of parallel executing threads does not exceed 4096 at a time. The main computational bottleneck in sequential code comes from the necessity of performing Gibbs sampling using a (temporal) univariate regression model for all voxel time series. We coded this part of the MCMC computation as device code, i.e. a kernel to be executed by the CUDA threads. CUDA threads execute on the GPU device that operates as a coprocessor to the host running the MCMC simulation. Following the proposed Bayesian model (4), each thread implements a Gibbs sampler to draw from the posterior of a univariate regression with a conditionally conjugate prior. The host code is responsible for controlling the MCMC simulation. At each iteration, the threads perform one Gibbs iteration for all voxels in parallel, to draw the threads' estimators for the regression coefficients β_i as specified in (4). In turn, the host, based on the simulated β_i values, draws from the posterior of a multivariate regression model to estimate V_β and ∆. These values are then used to drive the next iteration.

The bulk of the MCMC simulations for Bayesian data analysis is implemented in the kernel (device code). Most currently available RNGs for the GPU tend to have too high time- and space-complexity for our purposes. Therefore, we implemented and tested three different random number generators (RNGs) in device code, by porting three well-known RNGs to device code. Marsaglia's multicarry RNG (Marsaglia, 2003) follows the R implementation, is the fastest one, and is used by default; Brent's RNG (Brent, 2007) has higher quality but is not-so-fast; Matsumoto's Mersenne Twister (Matsumoto and Nishimura, 1998) is slower than the others. In addition, we had to ensure that different threads receive different random seeds. We generated random seeds for the threads by combining random seeds generated by the host with the threads' unique identification numbers. Random deviates from the normal (Gaussian) distribution and chi-squared distribution had to be implemented in device code as well. Random deviates from the normal distribution were generated using the Box-Muller method. In a similar vein, random deviates from the chi-squared distribution with ν degrees of freedom, χ²(ν), were generated from gamma deviates, Γ(ν/2, 1/2), following the method of Marsaglia and Tsang specified in (Press et al., 2007).

The next sections provide details on how to use cudaBayesreg (Ferreira da Silva, 2010b) for fMRI data analysis. Two data sets, which are included and documented in the complementary package cudaBayesregData (Ferreira da Silva, 2010b), have been used in testing: the 'fmri' and the 'swrfM' data sets. We performed MCMC simulations on these data sets using three types of code implementations for the Bayesian multilevel model specified before: a (sequential) R-language version, a (sequential) C-language version, and a CUDA implementation. Comparative runtimes for 3000 iterations in these three situations, for the data sets 'fmri' and 'swrfM', are as follows.

Runtimes in seconds for 3000 iterations:

          slice   R-code   C-code   CUDA
  fmri        3     1327      224     22
  swrfM      21     2534      309     41

Speed-up factors between the sequential versions and the parallel CUDA implementation are summarized next.

Comparative speedup factors:

          C-vs-R   CUDA-vs-C   CUDA-vs-R
  fmri       6.0        10.0        60.0
  swrfM      8.2         7.5        61.8

In these tests, the C implementation provided, approximately, a 7.6× mean speedup factor relative to the equivalent R implementation. The CUDA implementation provided an 8.8× mean speedup factor relative to the equivalent C implementation. Overall, the CUDA implementation yielded a significant 60× speedup factor. The tests were performed on a notebook equipped with a (low-end) graphics card: a 'GeForce 8400M GS' NVIDIA device. This GPU device has just 2 multiprocessors, Compute Capability 1.1, and delivers single-precision performance. The compiler flags used in compiling the source code are detailed in the package's Makefile. In particular, the optimization flag -O3 is set there. It is worth noting that the CUDA implementation in cudaBayesreg affords much higher speedups. First, the CUDA implementation may easily be modified to process all voxels of an fMRI volume in parallel, instead of processing data on a slice-by-slice basis. Second, GPUs with 16 multiprocessors and 512 CUDA cores and Compute Capability 2.0 are now available.
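The uniform and normal deviate generation described above can be sketched on the host side in a few lines. The following is an illustrative Python sketch only, not the package's device code: the multicarry update constants (36969, 18000) and the combining step follow the form used in R's Marsaglia-Multicarry generator, but the seeds and the scaling to (0, 1) shown here are assumptions.

```python
import math

# Host-side sketch of the default device RNG path described above:
# Marsaglia's multicarry uniform generator, then Box-Muller normals.
# Seeds and scaling are illustrative assumptions, not the kernel code.
class Multicarry:
    def __init__(self, z=362436069, w=521288629):
        self.z, self.w = z, w

    def uniform(self):
        # two 16-bit multiply-with-carry updates, combined into 32 bits
        self.z = (36969 * (self.z & 0xFFFF) + (self.z >> 16)) & 0xFFFFFFFF
        self.w = (18000 * (self.w & 0xFFFF) + (self.w >> 16)) & 0xFFFFFFFF
        u = ((self.z << 16) ^ (self.w & 0xFFFF)) & 0xFFFFFFFF
        return (u + 0.5) / 4294967296.0   # uniform deviate in (0, 1)

    def normal(self):
        # Box-Muller: two uniforms -> one standard normal deviate
        u1, u2 = self.uniform(), self.uniform()
        return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)

rng = Multicarry()
print(sum(rng.normal() for _ in range(100000)) / 100000)  # close to 0
```

In the kernels, each thread would run its own generator state, seeded from a host-supplied seed combined with the thread's unique identification number, as described in the text.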
Experiments using the fmri test dataset

The data set 'fmri.nii.gz' is available from the FMRIB/FSL site (www.fmrib.ox.ac.uk/fsl). This data set is from an auditory-visual experiment. Auditory stimulation was applied as an alternating "boxcar" with 45s-on-45s-off and visual stimulation was applied as an alternating "boxcar" with 30s-on-30s-off. The data set includes just 45 time points and 5 slices from the original 4D data. The file 'fmri_filtered_func_data.nii' included in cudaBayesregData was obtained from 'fmri.nii.gz' by applying FSL/FEAT pre-processing tools. For input/output of NIFTI formatted fMRI data sets cudaBayesreg depends on the R package oro.nifti (Whitcher et al., 2010). The following code runs the MCMC simulation for slice 3 of the fmri dataset, and saves the result.

> require("cudaBayesreg")
> slicedata <- read.fmrislice(fbase = "fmri",
+     slice = 3, swap = TRUE)
> ymaskdata <- premask(slicedata)
> fsave <- "/tmp/simultest1.sav"
> out <- cudaMultireg.slice(slicedata,
+     ymaskdata, R = 3000, keep = 5,
+     nu.e = 3, fsave = fsave, zprior = FALSE)

We may extract the posterior probability (PPM) images for the visual (vreg=2) and auditory (vreg=4) stimulation as follows (see Figures 1 and 2).

> post.ppm(out = out, slicedata = slicedata,
+     ymaskdata = ymaskdata, vreg = 2,
+     col = heat.colors(256))

Figure 1: PPM images for the visual stimulation

> post.ppm(out = out, slicedata = slicedata,
+     ymaskdata = ymaskdata, vreg = 4,
+     col = heat.colors(256))

Figure 2: PPM images for the auditory stimulation

To show the fitted time series for a (random) active voxel, as depicted in Figure 3, we use the code:

> post.tseries(out = out, slicedata = slicedata,
+     ymaskdata = ymaskdata, vreg = 2)

range pm2: -1.409497 1.661774

Figure 3: Fitted time-series for an active voxel

Summary statistics for the posterior mean values of regression coefficient vreg=2 are presented next. The same function plots the histogram of the posterior distribution for vreg=2, as represented in Figure 4.

> post.simul.hist(out = out, vreg = 2)
Call:
    density.default(x = pm2)

Data: pm2 (1525 obs.);  Bandwidth 'bw' = 0.07947

       x                  y
 Min.   :-1.6479   Min.   :0.0000372
 1st Qu.:-0.7609   1st Qu.:0.0324416
 Median : 0.1261   Median :0.1057555
 Mean   : 0.1261   Mean   :0.2815673
 3rd Qu.: 1.0132   3rd Qu.:0.4666134
 Max.   : 1.9002   Max.   :1.0588599
[1] "active range:"
[1] 0.9286707 1.6617739
[1] "non-active range:"
[1] -1.4094965 0.9218208
hpd (95%)= -0.9300451 0.9286707

Figure 4: Histogram of the posterior distribution of the regression coefficient β_2 (slice 3).

An important feature of the Bayesian model used in cudaBayesreg is the shrinkage induced by the hyperprior ν on the estimated parameters. We may assess the adaptive shrinkage properties of the Bayesian multilevel model for two different values of ν as follows.

> nu2 <- 45
> fsave2 <- "/tmp/simultest2.sav"
> out2 <- cudaMultireg.slice(slicedata,
+     ymaskdata, R = 3000, keep = 5,
+     nu.e = nu2, fsave = fsave2,
+     zprior = F)
> vreg <- 2
> x1 <- post.shrinkage.mean(out = out,
+     slicedata$X, vreg = vreg,
+     plot = F)
> x2 <- post.shrinkage.mean(out = out2,
+     slicedata$X, vreg = vreg,
+     plot = F)
> par(mfrow = c(1, 2), mar = c(4,
+     4, 1, 1) + 0.1)
> xlim = range(c(x1$beta, x2$beta))
> ylim = range(c(x1$yrecmean, x2$yrecmean))
> plot(x1$beta, x1$yrecmean, type = "p",
+     pch = "+", col = "violet",
+     ylim = ylim, xlim = xlim,
+     xlab = expression(beta), ylab = "y")
> legend("topright", expression(paste(nu,
+     "=3")), bg = "seashell")
> plot(x2$beta, x2$yrecmean, type = "p",
+     pch = "+", col = "blue", ylim = ylim,
+     xlim = xlim, xlab = expression(beta),
+     ylab = "y")
> legend("topright", expression(paste(nu,
+     "=45")), bg = "seashell")
> par(mfrow = c(1, 1))

Figure 5: Shrinkage assessment: variability of mean predictive values for ν = 3 and ν = 45.

Experiments using the SPM auditory dataset

In this section, we exemplify the analysis of the random effects distribution ∆, following the specification of cross-sectional units (group information) in the Z matrix of the statistical model. The Bayesian multilevel statistical model allows for the analysis of random effects through the specification of the Z matrix for the prior in (2). The dataset with prefix swrfM (argument fbase="swrfM") in the package's data directory includes mask files associated with the partition of the fMRI dataset 'swrfM' into 3 classes: cerebrospinal fluid (CSF), gray matter (GRY) and white matter (WHT). As before, we begin by loading the data and running the simulation. This time, however, we call cudaMultireg.slice with the argument zprior=TRUE. This argument will launch read.Zsegslice, which reads the segmented images (CSF/GRY/WHT) to build the Z matrix.
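To make the role of the Z matrix concrete, the sketch below builds the kind of cross-sectional design matrix that read.Zsegslice presumably assembles from the segmentation masks: one row per voxel regression equation (see prior (2)), with an intercept column plus one 0/1 indicator column per tissue class. The exact column layout used by the package is an assumption here, made only for illustration.

```python
import numpy as np

# Hypothetical sketch of a Z matrix built from segmentation labels:
# one row per voxel regression, an intercept column, and one 0/1
# indicator column per tissue class. The layout used by
# read.Zsegslice in cudaBayesreg is an assumption here.
def build_Z(labels, classes=("CSF", "GRY", "WHT")):
    labels = np.asarray(labels)
    cols = [np.ones(len(labels))]                          # intercept
    cols += [(labels == c).astype(float) for c in classes] # class indicators
    return np.column_stack(cols)

Z = build_Z(["CSF", "GRY", "GRY", "WHT", "CSF"])
print(Z.shape)  # (5, 4)
```

Each row of this Z then ties the corresponding voxel's regression coefficients to its tissue class through the second-stage regression B = Z∆ + V in (3).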
> fbase <- "swrfM"
> slice <- 21
> slicedata <- read.fmrislice(fbase = fbase,
+     slice = slice, swap = TRUE)
> ymaskdata <- premask(slicedata)
> fsave3 <- "/tmp/simultest3.sav"
> out <- cudaMultireg.slice(slicedata,
+     ymaskdata, R = 3000, keep = 5,
+     nu.e = 3, fsave = fsave3,
+     zprior = TRUE)

We confirm that areas of auditory activation have been effectively selected by displaying the PPM image for regression variable vreg=2.

> post.ppm(out = out, slicedata = slicedata,
+     ymaskdata = ymaskdata, vreg = 2,
+     col = heat.colors(256))

Figure 6: PPM images for the auditory stimulation

Plots of the draws of the mean of the random effects distribution are presented in Figure 7.

> post.randeff(out)

Figure 7: Draws of the mean of the random effects distribution

Random effects plots for each of the 3 classes are obtained by calling,

> post.randeff(out, classnames = c("CSF",
+     "GRY", "WHT"))

Plots of the random effects associated with the 3 classes are depicted in Figures 8–10.

Figure 8: Draws of the random effects distribution for class CSF
Figure 9: Draws of the random effects distribution for class GRY

> post.randeff(out, classnames = c("CSF",
+     "GRY", "WHT"))

Figure 10: Draws of the random effects distribution for class WHT

Conclusion

The CUDA implementation of the Bayesian model in cudaBayesreg has been able to reduce significantly the runtime processing of MCMC simulations. For the applications described in this paper, we have obtained speedup factors of 60× compared to the equivalent sequential R code. The results point out the enormous potential of parallel high-performance computing strategies on manycore GPUs.

Bibliography

J. Berger and T. Sellke. Testing a point null hypothesis: the irreconcilability of p values and evidence. Journal of the American Statistical Association, 82:112–122, 1987.

R. P. Brent. Some long-period random number generators using shifts and xors. ANZIAM Journal, 48 (CTAC2006):C188–C202, 2007. URL http://wwwmaths.anu.edu.au/~brent.

A. R. Ferreira da Silva. cudaBayesreg: CUDA Parallel Implementation of a Bayesian Multilevel Model for fMRI Data Analysis, 2010a. URL http://CRAN.R-project.org/package=cudaBayesreg. R package version 0.3-8.

A. R. Ferreira da Silva. cudaBayesregData: Data sets for the examples used in the package cudaBayesreg, 2010b. URL http://CRAN.R-project.org/package=cudaBayesreg. R package version 0.3-8.

A. R. Ferreira da Silva. A Bayesian multilevel model for fMRI data analysis. Comput. Methods Programs Biomed., in press, 2010c.

K. J. Friston, W. Penny, C. Phillips, S. Kiebel, G. Hinton, and J. Ashburner. Classical and Bayesian inference in neuroimaging: Theory. NeuroImage, 16:465–483, 2002.

A. Gelman. Multilevel (hierarchical) modeling: What it can and cannot do. Technometrics, 48(3):432–435, Aug. 2006.

G. Marsaglia. Random number generators. Journal of Modern Applied Statistical Methods, 2, May 2003.

M. Matsumoto and T. Nishimura. Mersenne twister: a 623-dimensionally equidistributed uniform pseudorandom number generator. ACM Transactions on Modeling and Computer Simulation, 8:3–30, Jan. 1998.

NVIDIA Corporation. CUDA C Best Practices Guide, Version 3.2, Aug. 2010a. URL http://www.nvidia.com/CUDA.

NVIDIA Corporation. NVIDIA CUDA Programming Guide, Version 3.2, Aug. 2010b. URL http://www.nvidia.com/CUDA.

W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, third edition, 2007.

P. Rossi and R. McCulloch. bayesm: Bayesian Inference for Marketing/Micro-econometrics, 2008. URL http://faculty.chicagogsb.edu/peter.rossi/research/bsm.html. R package version 2.2-2.

P. E. Rossi, G. Allenby, and R. McCulloch. Bayesian Statistics and Marketing. John Wiley and Sons, 2005.
G. E. Sardy. Computing Brain Activity Maps from fMRI Time-Series Images. Cambridge University Press, 2007.

E. Vul, C. Harris, P. Winkielman, and H. Pashler. Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science, 4(3):274–290, 2009.

B. Whitcher, V. Schmid, and A. Thornton. oro.nifti: Rigorous - NIfTI Input/Output, 2010. URL http://CRAN.R-project.org/package=oro.nifti. R package version 0.1.4.

Adelino Ferreira da Silva
Universidade Nova de Lisboa
Faculdade de Ciências e Tecnologia
Portugal
afs@fct.unl.pt
binGroup: A Package for Group Testing

by Christopher R. Bilder, Boan Zhang, Frank Schaarschmidt and Joshua M. Tebbs

Abstract When the prevalence of a disease or of some other binary characteristic is small, group testing (also known as pooled testing) is frequently used to estimate the prevalence and/or to identify individuals as positive or negative. We have developed the binGroup package as the first package designed to address the estimation problem in group testing. We present functions to estimate an overall prevalence for a homogeneous population. Also, for this setting, we have functions to aid in the very important choice of the group size. When individuals come from a heterogeneous population, our group testing regression functions can be used to estimate an individual probability of disease positivity by using the group observations only. We illustrate our functions with data from a multiple vector transfer design experiment and a human infectious disease prevalence study.

Introduction

Group testing, where individuals are composited into pools to screen for a binary characteristic, has a long history of successful application in areas such as human infectious disease detection, veterinary screening, drug discovery, and insect vector pathogen transmission (Pilcher et al., 2005; Peck, 2006; Remlinger et al., 2006; Tebbs and Bilder, 2004). Group testing works well in these settings because the prevalence is usually small and individual specimens (e.g., blood, urine, or cattle ear notches) can be composited without loss of diagnostic test accuracy. Group testing is often performed by assigning each individual to a group and testing every group for a positive or negative outcome of the binary characteristic. Using these group responses alone, estimates of overall prevalence or subject-specific probabilities of positivity can be found. When further individual identification of the binary characteristic is of interest, re-testing of specimens within positive groups can be performed to decode the individual positives from the negatives. There are other variants to how group testing is applied, and some will be discussed in this paper. A recent review of group testing for estimation and identification is found in Hughes-Oliver (2006).

Our binGroup package (Zhang et al., 2010) is the first dedicated to the group testing estimation problem within homogeneous or heterogeneous populations. We also provide functions to determine the optimal group size based on prior knowledge of what the overall prevalence may be. All of our functions have been written in familiar formats to those where individual testing is used (e.g., binom.confint() in binom (Dorai-Raj, 2009) or glm() in stats).

Homogeneous populations

Group testing has been used traditionally in settings where one overall prevalence of a binary characteristic within a homogeneous population is of interest. Typically, one assumes that each individual is independent and has the same probability p of the characteristic, so that p is the overall prevalence. In the next section, we will consider the situation where individuals have different probabilities of positivity. Here, we let θ denote the probability that a group of size s is positive. One can then show that p = 1 − (1 − θ)^{1/s} when diagnostic testing is perfect. This equation plays a central role in making estimates and inferences about individuals when only the group responses are known and each individual is within only one group.

We have written two functions to calculate a confidence interval for p. First, the bgtCI() function calculates intervals for p when a common group size is used throughout the sample. For example, Ornaghi et al. (1999) estimate the probability that the female Delphacodes kuscheli (planthopper) transfers the Mal Rio Cuarto (MRC) virus to maize crops. In stage 4 of the experiment, n = 24 enclosed maize plants each had s = 7 planthopper vectors placed on them for forty-eight hours, and there were y = 3 plants that tested positive for the MRC virus after two months. The 95% confidence interval for the probability of transmission p is calculated by

> bgtCI(n = 24, y = 3, s = 7,
+     conf.level = 0.95,
+     alternative = "two.sided",
+     method = "Score")

95 percent Score confidence interval:
[ 0.006325, 0.05164 ]
Point estimate: 0.0189

where the score (Wilson) interval was used. While the score interval is usually one of the best in terms of coverage (Tebbs and Bilder, 2004), other intervals calculated by bgtCI() include the Clopper-Pearson, the asymptotic second-order corrected, and the Wald. The maximum likelihood estimate for p is 1 − (1 − 3/24)^{1/7} = 0.0189.

Group testing is usually applied with equally sized groups. When a common group size is not used, perhaps due to physical or design constraints, our bgtvs() function can calculate the exact interval proposed by Hepworth (1996, equation 5). The arguments to bgtvs() include the following vectors: s, the different group sizes occurring in the design; n,
the corresponding numbers of groups for each group size; and y, the corresponding numbers of observed positive groups for each group size. Note that the algorithm becomes computationally expensive when the number of different group sizes is more than three.

One of the most important design considerations is the choice of the group size. Choosing a group size that is too small may result in few groups testing positive, so more tests are used than necessary. Choosing a group size that is too large may result in almost all groups testing positive, which leads to a poor estimate of p. As a rule of thumb, one tries to choose a group size so that about half of the groups test positive. More formally, one can choose an s that minimizes the mean square error (MSE) for a fixed n and a prior estimate of p (Swallow, 1985). If we use a prior prevalence estimate of 0.0189, 24 groups, and a maximum possible group size of 100, our estDesign() function finds the optimal choice of s to be 43:

> estDesign(n = 24, smax = 100, p.tr = 0.0189)
group size s with minimal mse(p) = 43
$varp
[1] 3.239869e-05
$mse
[1] 3.2808e-05
$bias
[1] 0.0006397784
$exp
[1] 0.01953978

The function provides the corresponding variance, MSE, bias, and expected value for the maximum likelihood estimator of p. While s = 43 is optimal for this example, large group sizes can not necessarily be used in practice (e.g., dilution effects may prevent using a large group size), but this can still be used as a goal.

Our other functions for homogeneous population settings include bgtTest(), which calculates a p-value for a hypothesis test involving p. Also, bgtPower() calculates the power of the hypothesis test. Corresponding to bgtPower(), the nDesign() and sDesign() functions calculate the power with increasing n or s, respectively, with plot.bgtDesign() providing a plot. These functions allow researchers to design their own experiment in a similar manner to that described in Schaarschmidt (2007).

Heterogeneous populations

When covariates for individuals are available, we can model the probability of positivity as with any binary regression model. However, the complicating aspect here is that only the group responses may be available. Also, if both group responses and responses from re-tests are available, the correlation between these responses makes the analysis more difficult. Vansteelandt et al. (2000) and Xie (2001) have both proposed ways to fit these models. Vansteelandt et al. (2000) use a likelihood function written in terms of the initial group responses and maximize it to obtain the maximum likelihood estimates of the model parameters. This fitting procedure can not be used when re-tests are available. Xie (2001) writes the likelihood function in terms of the unobserved individual responses and uses the EM algorithm for estimation. This approach has an advantage over Vansteelandt et al. (2000) because it can be used in more complicated settings, such as when re-tests are available or when individuals appear in multiple groups (e.g., matrix or array-based pooling). However, while Xie's fitting procedure is more general, it can be very slow to converge for some group testing protocols.

The gtreg() function fits group testing regression models in situations where individuals appear in only one group and no re-tests are performed. The function call is very similar to that of glm() in the stats package. Additional arguments include sensitivity and specificity of the group test, group numbers for the individuals, and specification of either the Vansteelandt or Xie fitting methods. Both model-fitting methods will produce approximately the same estimates and corresponding standard errors.

We illustrate the gtreg() function with data from Vansteelandt et al. (2000). The data were obtained through a HIV surveillance study of pregnant women in rural parts of Kenya. For this example, we model the probability that a woman is HIV positive using the covariates age and highest attained education level (treated as ordinal). The data structure is

> data(hivsurv)
> tail(hivsurv[, c(3, 5, 6:8)], n = 7)
    AGE EDUC HIV gnum groupres
422  29    3   1   85        1
423  17    2   0   85        1
424  18    2   0   85        1
425  18    2   0   85        1
426  22    3   0   86        0
427  30    2   0   86        0
428  34    3   0   86        0

Each individual within a group (gnum is the group number) is given the same group response (groupres) within the data set. For example, individual #422 is positive (1) for HIV, and this leads all individuals within group #85 to have a positive group response. Note that the individual HIV responses are known here because the purpose of the original study was to show that group testing works as well as individual testing (Verstraeten et al., 1998). Continuing, the gtreg() function fits the model, and the fit is summarized with summary():

> fit1 <- gtreg(formula = groupres ~ AGE + EDUC,
+     data = hivsurv, groupn = gnum, sens = 0.99,
+     spec = 0.95, linkf = "logit",
+     method = "Vansteelandt")
CONTRIBUTED RESEARCH ARTICLES

> summary(fit1)

Call: gtreg(formula = groupres ~ AGE + EDUC,
    data = hivsurv, groupn = gnum, sens = 0.99,
    spec = 0.95, linkf = "logit",
    method = "Vansteelandt")

Deviance Residuals:
    Min      1Q  Median      3Q     Max
-1.1811 -0.9384 -0.8219  1.3299  1.6696

Coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept) -2.99039    1.59911  -1.870   0.0615 .
AGE         -0.05163    0.06748  -0.765   0.4443
EDUC         0.73621    0.43885   1.678   0.0934 .
---
Signif. codes: 0 *** 0.001 ** 0.01 * 0.05 . 0.1   1

Null deviance: 191.4 on 427 degrees of freedom
Residual deviance: 109.4 on 425 degrees of freedom
AIC: 115.4

Number of iterations in optim(): 138

The results from gtreg() are stored in fit1 here, which has a "gt" class type. The estimated model can be written as

    logit(p̂_ik) = -2.99 - 0.0516 Age_ik + 0.7362 Educ_ik

where p̂_ik is the estimated probability that the ith individual in the kth group is positive. In addition to the summary.gt() function, method functions to find residuals and predicted values are available.

We have also written a function sim.g() that simulates group test responses for a given binary regression model. The options within it allow for a user-specified set of covariates or one covariate that is simulated from a gamma distribution. Individuals are randomly put into groups of a specified size by the user. The function can also be used to simulate group testing data from a homogeneous population by specifying zero coefficients for covariates.

One of the most important innovations in group testing is the development of matrix or array-based pooling (Phatarfod and Sudbury, 1994; Kim et al., 2007). In this setting, specimens are placed into a matrix-like grid so that they can be pooled within each row and within each column. Potentially positive individuals occur at the intersection of positive rows and columns. If identification of these positive individuals is of interest, individual re-testing can be done on specimens at these intersections. With the advent of high-throughput screening, matrix pooling has become easier to perform because pooling and testing is done with minimal human intervention.

The gtreg.mp() function fits a group testing regression model in a matrix pooling setting. The row and column group responses can be used alone to fit the model. If individual re-testing is performed on the positive row and column intersections, these re-tests can be included when fitting the model. Note that the speed of model convergence can be improved by including re-tests. Within gtreg.mp(), we implement the EM algorithm given by Xie (2001) for matrix pooling settings where individual re-tests may or may not be performed. Due to the complicated response nature of matrix pooling, this algorithm involves using Gibbs sampling for the E-step in order to approximate the conditional expected values of a positive individual response.

Through personal communication with Minge Xie, we discovered that while he suggested the model fitting procedure could be used for matrix pooling, he had not implemented it; therefore, to our knowledge, this is the first time group testing regression models for a matrix pooling setting have been put into practice. Zhang and Bilder (2009) provide a technical report on the model fitting details. We hope that the gtreg.mp() function will encourage researchers to include covariates when performing matrix pooling rather than assume one common p, as has been done in the past.

The sim.mp() function simulates matrix pooling data. In order to simulate the data for a 5 × 6 and a 4 × 5 matrix, we can implement the following:

> set.seed(9128)
> sa1a <- sim.mp(par = c(-7, 0.1), n.row = c(5,4),
+   linkf = "logit", n.col = c(6,5), sens = 0.95,
+   spec = 0.95)
> sa1 <- sa1a$dframe
> head(sa1)
       x col.resp row.resp coln rown arrayn retest
1 29.961        0        0    1    1      1     NA
2 61.282        0        1    1    2      1     NA
3 34.273        0        1    1    3      1     NA
4 46.190        0        0    1    4      1     NA
5 39.438        0        1    1    5      1     NA
6 45.880        1        0    2    1      1     NA

where sa1 contains the column, row, and re-test responses along with one covariate x. The coln, rown, and arrayn variables are the column, row, and array numbers, respectively, for the responses. The covariate is simulated using the default gamma distribution with shape parameter 20 and scale parameter 2 (a user-specified matrix of covariates can also be used with the function). The par argument gives the coefficients in the model of logit(p_ijk) = -7 + 0.1 x_ijk, where x_ijk and p_ijk are the covariate and positivity probability, respectively, for the individual in row i, column j, and array k. We fit a model to the data using the following:

> fit1mp <- gtreg.mp(formula = cbind(col.resp,
+   row.resp) ~ x, data = sa1, coln = coln,
+   rown = rown, arrayn = arrayn, sens = 0.95,
+   spec = 0.95, linkf = "logit", n.gibbs = 2000)
> coef(fit1mp)
(Intercept)           x
-6.23982684  0.08659878
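The row/column response structure that gtreg.mp() works with can be illustrated with a small base-R sketch (ours, not binGroup code): ignoring testing error for simplicity, a row or column pool is positive exactly when it contains at least one truly positive specimen, and re-testing candidates sit at the positive intersections.

```r
# True binary statuses of 20 specimens on a hypothetical 4 x 5 plate
set.seed(1)
status <- matrix(rbinom(20, size = 1, prob = 0.1), nrow = 4, ncol = 5)

# Error-free pool responses: a pool is positive if any member is positive
row.resp <- as.integer(rowSums(status) > 0)
col.resp <- as.integer(colSums(status) > 0)

# Candidates for individual re-testing lie at positive intersections
candidates <- outer(row.resp, col.resp) == 1
which(candidates, arr.ind = TRUE)
```

With testing error, each pool response would additionally be flipped with probabilities governed by sens and spec, which is what makes the model fitting in gtreg.mp() require the Gibbs-based E-step described above.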
The coefficients are similar to the ones used to simulate the data. Methods to summarize the model's fit and to perform predictions are also available.

Conclusion

Group testing is used in a vast number of applications where a binary characteristic is of interest and individual specimens can be composited. Our package brings together the most often used and recommended confidence intervals for p. Also, our package makes the regression methods of Vansteelandt et al. (2000) and Xie (2001) easily accessible for the first time. We hope this will encourage researchers to take into account potentially important covariates in a group testing setting.

We see the current form of the binGroup package as a beginning rather than an end to meeting researcher needs. There are many additions that would be further helpful to researchers. For example, there are a number of re-testing protocols, such as halving (Gastwirth and Johnson, 1994) or sub-dividing positive groups of any size (Kim et al., 2007), that could be implemented, but these would involve a large amount of new programming due to the complex nature of the re-testing. Also, the binGroup package does not have any functions solely for individual identification of a binary characteristic. For example, the optimal group size to use for identification alone is usually different from the optimal group size to use when estimating p. Given these desirable extensions, we encourage others to send us their functions or write new functions of their own. We would be willing to work with anyone to include them within the binGroup package. This would enable all researchers to have one group testing package rather than many small packages with much duplication.

Acknowledgements

This research is supported in part by Grant R01 AI067373 from the National Institutes of Health.

Bibliography

S. Dorai-Raj. binom: Binomial Confidence Intervals for Several Parameterizations, 2009. URL http://cran.r-project.org/web/packages/binom. R package version 1.0-4.

J. Gastwirth and W. Johnson. Screening with cost-effective quality control: Potential applications to HIV and drug testing. Journal of the American Statistical Association, 89:972–981, 1994.

G. Hepworth. Exact confidence intervals for proportions estimated by group testing. Biometrics, 52:1134–1146, 1996.

J. Hughes-Oliver. Pooling experiments for blood screening and drug discovery. In Screening: Methods for Experimentation in Industry, Drug Discovery, and Genetics, edited by A. Dean and S. Lewis, New York: Springer.

H. Kim, M. Hudgens, J. Dreyfuss, D. Westreich, and C. Pilcher. Comparison of group testing algorithms for case identification in the presence of test error. Biometrics, 63:1152–1163, 2007.

J. Ornaghi, G. March, G. Boito, A. Marinelli, A. Beviacqua, J. Giuggia, and S. Lenardon. Infectivity in natural populations of Delphacodes kuscheli vector of 'Mal Rio Cuarto' virus. Maydica, 44:219–223, 1999.

C. Peck. Going after BVD. Beef, 42:34–44, 2006.

R. Phatarfod and A. Sudbury. The use of a square array scheme in blood testing. Statistics in Medicine, 13:2337–2343, 1994.

C. Pilcher, S. Fiscus, T. Nguyen, E. Foust, L. Wolf, D. Williams, R. Ashby, J. O'Dowd, J. McPherson, B. Stalzer, L. Hightow, W. Miller, J. Eron, M. Cohen, and P. Leone. Detection of acute infections during HIV testing in North Carolina. New England Journal of Medicine, 352:1873–1883, 2005.

K. Remlinger, J. Hughes-Oliver, S. Young, and R. Lam. Statistical design of pools using optimal coverage and minimal collision. Technometrics, 48:133–143, 2006.

F. Schaarschmidt. Experimental design for one-sided confidence intervals or hypothesis tests in binomial group testing. Communications in Biometry and Crop Science, 2:32–40, 2007.

W. Swallow. Group testing for estimating infection rates and probabilities of disease transmission. Phytopathology, 75:882–889, 1985.

J. Tebbs and C. Bilder. Confidence interval procedures for the probability of disease transmission in multiple-vector-transfer designs. Journal of Agricultural, Biological, and Environmental Statistics, 9:75–90, 2004.

S. Vansteelandt, E. Goetghebeur, and T. Verstraeten. Regression models for disease prevalence with diagnostic tests on pools of serum samples. Biometrics, 56:1126–1133, 2000.

T. Verstraeten, B. Farah, L. Duchateau, and R. Matu. Pooling sera to reduce the cost of HIV surveillance: a feasibility study in a rural Kenyan district. Tropical Medicine and International Health, 3:747–750, 1998.

M. Xie. Regression analysis of group testing samples. Statistics in Medicine, 20:1957–1969, 2001.
B. Zhang and C. Bilder. The EM algorithm for group testing regression models under matrix pooling. Technical Report, University of Nebraska-Lincoln, Department of Statistics. URL http://www.chrisbilder.com/grouptesting.

B. Zhang, C. Bilder, and F. Schaarschmidt. binGroup: Evaluation and experimental design for binomial group testing, 2010. URL http://cran.r-project.org/web/packages/binGroup. R package version 1.0-5.

Christopher R. Bilder
Department of Statistics
University of Nebraska-Lincoln, Lincoln, NE
chris@chrisbilder.com

Boan Zhang
Department of Statistics
University of Nebraska-Lincoln, Lincoln, NE
boan.zhang@huskers.unl.edu

Frank Schaarschmidt
Institut für Biostatistik
Leibniz Universität Hannover, Germany
schaarschmidt@biostat.uni-hannover.de

Joshua M. Tebbs
Department of Statistics
University of South Carolina, Columbia, SC
tebbs@stat.sc.edu
The RecordLinkage Package: Detecting Errors in Data

by Murat Sariyar and Andreas Borg

Abstract: Record linkage deals with detecting homonyms and mainly synonyms in data. The package RecordLinkage provides means to perform and evaluate different record linkage methods. A stochastic framework is implemented which calculates weights through an EM algorithm. The determination of the necessary thresholds in this model can be achieved by tools of extreme value theory. Furthermore, machine learning methods are utilized, including decision trees (rpart), bootstrap aggregating (bagging), ada boost (ada), neural nets (nnet) and support vector machines (svm). The generation of record pairs and comparison patterns from single data items is provided as well. Comparison patterns can be chosen to be binary or based on some string metrics. In order to reduce computation time and memory usage, blocking can be used. Future development will concentrate on additional and refined methods, performance improvements and input/output facilities needed for real-world application.

Introduction

When dealing with data from different sources that stem from and represent one realm, it is likely that homonym and especially synonym errors occur. In order to reduce these errors, either different data files are linked or one data file is deduplicated. Record linkage is the task of reducing such errors through a vast number of different methods. These methods can be divided into two classes. One class consists of stochastic methods based on the framework of Fellegi and Sunter (1969). The other class comprises non-stochastic methods from the machine learning context. Methods from both classes need preliminary steps in order to generate data pairs from single data items. These record pairs are then transformed into comparison patterns. An exemplary comparison pattern is of the form γ = (1, 0, 1, 0, 1, 0, 0, 0), where only agreement and non-agreement of eight attributes are evaluated. This transformation and other preprocessing steps are described in the next section.

Stochastic record linkage is primarily defined by the assumption of a probability model concerning probabilities of agreement of attributes conditional on the matching status of the underlying data pair. Machine learning methods reduce the problem of record linkage to a classification problem.

The package RecordLinkage is designed to facilitate the application of record linkage in R. The idea for this package evolved whilst using R for record linkage of data stemming from a German cancer registry. An evaluation of different methods thereafter led to a large number of functions and data structures. The organisation of these functions and data structures as an R package eases the evaluation of record linkage methods and facilitates the application of record linkage to different data sets. These are the main goals behind the package described in this paper.

RecordLinkage is available from our project home page on R-Forge[1] as well as from CRAN.

Data preprocessing

First steps in data preprocessing usually include standardization of data, for example conversion of accented characters or enforcing a well-defined date format. However, such methods are not included in RecordLinkage. In the package, the user is responsible for providing the data in the form the package expects.

Data must reside in a data frame where each row holds one record and columns represent attributes. The package includes two example data sets, RLdata500 and RLdata10000, which differ in the number of records.[2] The following example shows the structure of RLdata500.

> library(RecordLinkage)
> data(RLdata500)
> RLdata500[1:5, ]
  fname_c1 fname_c2 lname_c1 lname_c2   by bm bd
1  CARSTEN     <NA>    MEIER     <NA> 1949  7 22
2     GERD     <NA>    BAUER     <NA> 1968  7 27
3   ROBERT     <NA> HARTMANN     <NA> 1930  4 30
4   STEFAN     <NA>    WOLFF     <NA> 1957  9  2
5     RALF     <NA>  KRUEGER     <NA> 1966  1 13

The fields in this data set are first name and family name, each split into a first and second component, and the date of birth, with separate components for day, month and year.

Column names are reused for comparison patterns (see below). If a record identifier other than the row number is desired, it should be included as a separate column instead of using row.names.

[1] http://r-forge.r-project.org/projects/recordlinkage/
[2] The data were created randomly from German name statistics and have no relation to existing persons.
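The construction of a binary comparison pattern such as γ = (1, 0, 1, 0, 1, 0, 0, 0) for a single record pair can be sketched in a few lines of base R (an illustration of the idea only; the record values below are made up and RecordLinkage builds the patterns for all record pairs itself):

```r
# Two hypothetical records with the RLdata500 attribute layout
r1 <- c(fname_c1 = "CARSTEN", fname_c2 = NA, lname_c1 = "MEIER",
        lname_c2 = NA, by = "1949", bm = "7", bd = "22")
r2 <- c(fname_c1 = "CARSTEN", fname_c2 = NA, lname_c1 = "MAIER",
        lname_c2 = NA, by = "1949", bm = "7", bd = "21")

# 1 = agreement, 0 = disagreement, NA where either value is missing
gamma <- as.integer(r1 == r2)
names(gamma) <- names(r1)
gamma
```

Note how a missing value in either record propagates into the pattern as NA, exactly the situation discussed for the compare functions below.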
Building comparison patterns

We include two functions for the creation of comparison patterns from data sets: compare.dedup for deduplication of a single data set and compare.linkage for linking two data sets together. In the case of three or more data sets, two of them are iteratively linked and replaced by the data set which is the result of the linkage. This leads to n − 1 linkages for n data sets.

Both compare functions return an object of class "RecLinkData" which includes, among other components, the resulting comparison patterns as component pairs. In the following, such an object will be referred to as a data object.

> rpairs <- compare.dedup(RLdata500,
+   identity = identity.RLdata500)
> rpairs$pairs[1:5, ]
  id1 id2 fname_c1 fname_c2 lname_c1 lname_c2 by
1   1   2        0       NA        0       NA  0
2   1   3        0       NA        0       NA  0
3   1   4        0       NA        0       NA  0
4   1   5        0       NA        0       NA  0
5   1   6        0       NA        0       NA  0
  bm bd is_match
1  1  0        0
2  0  0        0
3  0  0        0
4  0  0        0
5  1  0        0

The printed slice of rpairs$pairs shows the structure of comparison patterns: the row numbers of the underlying records are followed by their corresponding comparison vector. Note that a missing value in either of two records results in a NA in the corresponding column of the comparison pattern. The classification procedures described in the following sections treat these NAs as zeros by default. If this behaviour is undesired, further preprocessing by the user is necessary. Column is_match denotes the true matching status of a pair (0 means non-match, 1 means match). It can be set externally by using the optional argument identity of the compare functions.[3] This allows evaluation of record linkage procedures based on labeled data as a gold standard.

Blocking

Blocking is the reduction of the amount of data pairs through focusing on specified agreement patterns.

Unrestricted comparison yields comparison patterns for all possible data pairs: n(n − 1)/2 for deduplication of n records, n · m for linking two data sets with n and m records. Blocking is a common strategy to reduce computation time and memory consumption by only comparing records with equal values for a subset of attributes, called blocking fields. A blocking specification can be supplied to the compare functions via the argument blockfld. The simplest specification is a vector of column indices denoting the attributes on which two records must agree (possibly after applying a phonetic code, see below) to appear in the output. Combining several such specifications in a list leads to the union of the sets obtained by the individual application of the specifications. In the following example, two records must agree in either the first component of the first name or the complete date of birth to appear in the resulting set of comparison patterns.

> rpairs <- compare.dedup(RLdata500,
+   blockfld = list(1, 5:7),
+   identity = identity.RLdata500)
> rpairs$pairs[c(1:3, 1203:1204), ]
     id1 id2 fname_c1 fname_c2 lname_c1 lname_c2
1     17 119        1       NA        0       NA
2     61 106        1       NA        0       NA
3     61 175        1       NA        0       NA
1203  37  72        0       NA        0       NA
1204  44 339        0       NA        0       NA
     by bm bd is_match
1     0  0  0        0
2     0  0  1        0
3     0  0  1        0
1203  1  1  1        1
1204  1  1  1        0

Phonetic functions and string comparators

Phonetic functions and string comparators are similar, yet distinct, approaches to dealing with typographical errors in character strings. A phonetic function maps words in a natural language to strings representing their pronunciation (the phonetic code). The aim is that words which sound similar enough get the same phonetic code. Obviously one needs different phonetic functions for different languages.[4] Package RecordLinkage includes the popular Soundex algorithm for English and a German language algorithm introduced by Michael (1999), implemented through functions soundex and pho_h respectively. The argument phonetic of the compare functions controls the application of the phonetic function, which defaults to pho_h and can be set via the argument phonfun. Typically, an integer vector is supplied which specifies the indices of the data columns for which a phonetic code is to be computed before comparison with other records. Note that the phonetic function, if requested, is applied before the blocking process; therefore the equality restrictions imposed by blocking apply to phonetic codes. Consider, for example, a call with arguments phonetic = 1:4 and blockfld = 1. In this case, the actual blocking criterion is agreement on the phonetic code of the first attribute.

[3] See documentation for compare.* for details.
[4] For this reason, problems may arise when records in one file stem from individuals of different nationalities.
63.
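How much blocking reduces the workload can be estimated with a few lines of base R (a back-of-the-envelope sketch with made-up blocking values, independent of the package's internal implementation):

```r
# Deduplicating n records without blocking compares all pairs:
n <- 500
n * (n - 1) / 2                            # 124750 candidate pairs

# With a single blocking field, pairs are only formed within groups
# of records that share that field's value (hypothetical values here):
set.seed(42)
block.values <- sample(LETTERS[1:20], n, replace = TRUE)
group.sizes  <- table(block.values)
sum(group.sizes * (group.sizes - 1) / 2)   # a small fraction of the above
```

The second count is the union described above for a one-element blocking specification; supplying a list of specifications adds the pair sets of each specification together (minus overlaps).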
String comparators measure the similarity between strings, usually with a similarity measure in the range [0, 1], where 0 denotes maximal dissimilarity and 1 equality. This allows 'fuzzy' comparison patterns as displayed in the following example.[5]

> rpairsfuzzy <- compare.dedup(RLdata500,
+   blockfld = c(5, 6), strcmp = TRUE)
> rpairsfuzzy$pairs[1:5, ]
  id1 id2  fname_c1 fname_c2  lname_c1 lname_c2
1 357 414 1.0000000       NA 1.0000000       NA
2 389 449 0.6428571       NA 0.0000000       NA
3 103 211 0.7833333       NA 0.5333333       NA
4   6 328 0.4365079       NA 0.4444444       NA
5  37  72 0.9750000       NA 0.9500000       NA
  by bm        bd is_match
1  1  1 0.7000000       NA
2  1  1 0.6666667       NA
3  1  1 0.0000000       NA
4  1  1 0.0000000       NA
5  1  1 1.0000000       NA

Controlling the application of string comparators works in the same manner as for phonetic functions, via the arguments strcmp and strcmpfun. The algorithm by Winkler (1990) (function jarowinkler) and one based on the edit distance by Levenshtein (function levenshteinSim) are included in the package. String comparison and phonetic encoding cannot be used simultaneously on one attribute but can be applied to different attributes within one set of comparison patterns, as in the function call compare.dedup(RLdata500, phonetic = 1:4, strcmp = 5:7). We refer to the reference manual for further information.

Stochastic record linkage

Theory

Stochastic record linkage relies on the assumption of conditional probabilities concerning comparison patterns. The probabilities of the random vector γ = (γ_1, ..., γ_n) having value γ̃ = (γ̃_1, ..., γ̃_n) conditional on the match status Z are defined by

    u_γ̃ = P(γ = γ̃ | Z = 0),    m_γ̃ = P(γ = γ̃ | Z = 1),

where Z = 0 stands for a non-match and Z = 1 for a match. In the Fellegi-Sunter model these probabilities are used to compute weights of the form

    w_γ̃ = log( P(γ = γ̃ | Z = 1) / P(γ = γ̃ | Z = 0) ).

These weights are used in order to discern between matches and non-matches.

There are several ways of estimating the probabilities involved in this model. In RecordLinkage an EM algorithm is used as a promising method for reliable estimation. The backbone of this algorithm is described by Haber (1984). We extended and implemented this algorithm in C in order to improve its performance. Without a proper stochastic model, the probabilities have to be fixed manually. If only weights are to be computed without relying on the assumptions of probabilities, then simple methods like the one implemented by Contiero et al. (2005) are suitable. Further details of these methods are also found in Sariyar et al. (2009).

Weight calculation based on the EM algorithm and the method by Contiero et al. (2005) are implemented by the functions emWeights and epiWeights. Both take a data set object as argument and return a copy with the calculated weights stored in additional components. Calling summary on the result shows the distribution of weights in histogram style. This information can be helpful for determining classification thresholds, e.g. by identifying clusters of record pairs with high or low weights as non-matches or matches respectively.

> rpairs <- epiWeights(rpairs)
> summary(rpairs)
Deduplication Data Set

500 records
1221 record pairs

49 matches
1172 non-matches
0 pairs with unknown status

Weight distribution:

[0.15,0.2] (0.2,0.25] (0.25,0.3] (0.3,0.35]
      1011          0         89         30
(0.35,0.4] (0.4,0.45] (0.45,0.5] (0.5,0.55]
        29          8          7          1
(0.55,0.6] (0.6,0.65] (0.65,0.7] (0.7,0.75]
        14         19         10          2
(0.75,0.8]
         1

Discernment between matches and non-matches is achieved by means of computing weight thresholds. In the Fellegi-Sunter model, thresholds are computed via specification of desired (and feasible) values of homonym and synonym errors so that the number of doubtful cases is minimized.

In the package, three promising variants for determining the threshold are implemented. The most common practice is to determine thresholds by clerical review, either a single threshold which separates links and non-links or separate thresholds for links and non-links which define a range of doubtful cases between them.

[5] Blocking is used in this example for the purpose of reducing computation time.
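If the attribute comparisons are additionally assumed conditionally independent given Z, the weight w_γ̃ decomposes into a sum of per-attribute log-likelihood ratios. The following base-R sketch (ours) computes such a weight from given per-attribute probabilities m_j = P(γ_j = 1 | Z = 1) and u_j = P(γ_j = 1 | Z = 0); emWeights instead estimates these quantities from the data via the EM algorithm:

```r
# Fellegi-Sunter weight under conditional independence:
# agreement on attribute j contributes log(m[j]/u[j]),
# disagreement contributes log((1 - m[j])/(1 - u[j])).
fsWeight <- function(gamma, m, u) {
  sum(gamma * log(m / u) + (1 - gamma) * log((1 - m) / (1 - u)))
}

m <- c(0.95, 0.90, 0.95, 0.99)   # assumed values for illustration
u <- c(0.05, 0.10, 0.01, 0.30)

fsWeight(c(1, 1, 1, 1), m, u)    # evidence for a match
fsWeight(c(0, 0, 0, 1), m, u)    # evidence against a match
```

Agreement on an attribute that rarely agrees by chance (small u_j) contributes a large positive term, while disagreement pushes the weight down.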
RecordLinkage supports this by the function getPairs, which shows record pairs aligned in two consecutive lines along with their weight (see the following example). When appropriate thresholds are found, classification is performed with emClassify or epiClassify, which take as arguments the data set object and one or two classification thresholds.

> tail(getPairs(rpairs, 0.6, 0.5))
      Weight  id fname_c1 fname_c2 lname_c1
25 0.5924569 266    KARIN     <NA>     HORN
26           437   KARINW     <NA>     HORN
27 0.5924569 395  GISOELA     <NA>     BECK
28           404   GISELA     <NA>     BECK
29 0.5067013 388   ANDREA     <NA>    WEBER
30           408   ANDREA     <NA>  SCHMIDT
   lname_c2   by bm bd
25     <NA> 2002  6  4
26     <NA> 2002  6  4
27     <NA> 2003  4 16
28     <NA> 2003  4 16
29     <NA> 1945  5 20
30     <NA> 1945  2 20

> result <- epiClassify(rpairs, 0.55)

The result is an object of class "RecLinkResult", which differs from the data object in having a component prediction that represents the classification result. Calling summary on such an object shows error measures and a table comparing true and predicted matching status.[6]

> summary(result)
Deduplication Data Set
[...]
46 links detected
0 possible links detected
1175 non-links detected

alpha error: 0.061224
beta error: 0.000000
accuracy: 0.997543

Classification table:
           classification
true status    N    P    L
      FALSE 1172    0    0
      TRUE     3    0   46

One alternative to threshold determination by clerical review needs labeled training data on which the threshold for the whole data set is computed by minimizing the number of wrongly classified pairs. After weights have been calculated for these data, the classification threshold can be obtained by calling optimalThreshold on the training data object.

The other alternative for determining thresholds is an unsupervised procedure based on concepts of extreme value statistics. A mean excess plot is generated on which the interval representing the relevant area for false match rates is to be determined. Based on the assumption that this interval corresponds to a fat tail of the empirical weight distribution, the generalized Pareto distribution is used to compute the threshold discerning matches and non-matches. Details of this latter procedure will be found in a forthcoming paper which is still under review.

A function getParetoThreshold is included in the package which encapsulates the necessary steps. Called on a data set for which weights have been calculated, it brings up a mean excess plot of the weights. By clicking on the graph, the boundaries of the weight range in which matches and non-matches presumably overlap are selected. This area is usually discernible as a relatively long, approximately linear section in the middle region of the mean excess graph. This corresponds to the assumption that the generalized Pareto distribution is applicable. The data sets in the package provide only weak support for this assumption, especially because of their limited size. Figure 1 shows an example plot where the appropriate limits are displayed as dashed lines. The return value is a threshold which can be used with emClassify or epiClassify, depending on the type of weights. We refer to the package vignette "Classifying record pairs by means of Extreme Value Theory" for an example application.

[Figure 1: Example mean excess plot of weights (MRL against threshold) with selected limits shown as dashed lines.]

[6] True status is denoted by TRUE and FALSE, classification result by "N" (non-link), "L" (link) and "P" (possible link). To save space, some output is omitted, marked by '[...]' in this and some of the following examples.
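The three-way decision rule behind emClassify and epiClassify can be pictured as a simple cut on the weight axis (our sketch, not the package implementation): pairs above an upper threshold become links ("L"), pairs below a lower threshold become non-links ("N"), and pairs in between remain possible links ("P") for clerical review.

```r
# Three-way classification of weighted record pairs; the two
# thresholds span the range of doubtful cases (lower < upper).
classifyByWeight <- function(w, upper, lower) {
  stopifnot(lower < upper)
  cut(w, breaks = c(-Inf, lower, upper, Inf),
      labels = c("N", "P", "L"), right = FALSE)
}

w <- c(0.72, 0.59, 0.51, 0.21)
classifyByWeight(w, upper = 0.6, lower = 0.55)
```

Here 0.72 is classified as a link, 0.59 as a possible link, and the remaining two pairs as non-links; a single-threshold rule corresponds to letting the two thresholds coincide, leaving no possible links.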
Machine learning methods

Record linkage can be understood as a classification problem with comparison pattern γ as input and the matching status variable Z as output. With this view, a vast range of machine learning procedures, such as clustering methods, decision trees or support vector machines, becomes available for deduplicating or linking personal data. The following describes the application of machine learning to record linkage as it is implemented in the package.

Unsupervised classification

Unsupervised classification methods are attractive as they eliminate the necessity to provide representative training data, which can turn out to be a time-consuming task, involving either manual review or the finding of an adequate mechanism to generate artificial data. Motivated by this advantage, unsupervised clustering is incorporated into the package by means of the classifyUnsup function, which uses k-means clustering via kmeans or bagged clustering via bclust from package e1071 (Dimitriadou et al., 2009). It must be noted that the quality of the resulting classification varies significantly for different data sets and poor results can occur. The following example shows the invocation of k-means clustering and the resulting classification.

> summary(classifyUnsup(rpairs, method = "kmeans"))
Deduplication Data Set
[...]
62 links detected
0 possible links detected
1159 non-links detected

alpha error: 0.000000
beta error: 0.011092
accuracy: 0.989353

Classification table:
           classification
true status    N    P    L
      FALSE 1159    0   13
      TRUE     0    0   49

Supervised classification

Training data for calibrating supervised classification methods can be obtained in a variety of ways in the context of this package. An important distinction is to be made between cases where additional training data with known true identity status are available and those where a training set has to be determined from the data on which linkage is performed. In the former case, one can provide a distinct set of records which are compared by one of the compare functions (see above) to obtain a training set. In the latter case, two approaches exist: first, through what we call a minimal training set; second, through unsupervised classification.

To construct a minimal training set, comparison patterns in a data set are grouped according to their configuration of agreement values. For every present configuration, one representative is randomly chosen. Naturally, this procedure is only feasible for binary comparisons (agreement or disagreement coded as 1 and 0).

In our experience, the use of supervised classification with a minimal training set can yield results similar to those that might be achieved with randomly sampled training sets of a substantially larger size. Their small magnitude allows minimal training sets to be classified by clerical review with manageable effort.

Two functions in RecordLinkage facilitate the use of minimal training sets. A set with the defined properties can be assembled by calling getMinimalTrain on a data object. Calling editMatch on the result opens an edit window which prints each record pair on two consecutive lines and allows for setting its matching status (as displayed in Figure 2). In the following example, 17 comparison patterns are selected randomly as a minimal training set.

> minTrain <- getMinimalTrain(rpairs)
> minTrain <- editMatch(minTrain)

[Figure 2: Edit window for clerical review]

The second approach to obtaining training data when no labelled data are available, provided by function genSamples, uses unsupervised clustering in a manner similar to classifyUnsup. Instead of directly classifying all the data, a subset is extracted and classified by bagged clustering. It can then be used to calibrate a supervised classifier which is ultimately applied to the remaining set. Arguments to genSamples are the original data set, the number of non-matches to appear in the training set and the desired ratio of matches to non-matches.
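The construction principle behind a minimal training set can be sketched in base R (our illustration on made-up patterns; getMinimalTrain operates on the package's data objects): group the binary comparison patterns by their agreement configuration and draw one representative pair per configuration.

```r
# Binary comparison patterns, one row per record pair
patterns <- matrix(c(1, 0, 1,
                     1, 0, 1,
                     0, 0, 1,
                     1, 1, 1,
                     0, 0, 1), ncol = 3, byrow = TRUE)

# Encode each pattern's agreement configuration as a string key
config <- apply(patterns, 1, paste, collapse = "")

# One randomly chosen representative index per configuration
# (idx[sample(length(idx), 1)] avoids R's sample(n, 1) pitfall
# when a configuration occurs only once)
set.seed(1)
representatives <- tapply(seq_len(nrow(patterns)), config,
                          function(idx) idx[sample(length(idx), 1)])
patterns[representatives, ]
```

Here five pairs collapse to three distinct configurations, so only three pairs would need clerical review.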
CONTRIBUTED RESEARCH ARTICLES

desired ratio of matches to non-matches. For example, the call genSamples(datapairs, num.non = 200, des.mprop = 0.1) tries to build a training set with 200 non-matches and 20 matches. The return value is a list with two disjoint sets named train and valid.

In a scenario where different record linkage approaches are evaluated, it is useful to split a data set into training and validation sets. Such a split can be performed by splitData. The return value is a list with components train and valid as in the case of genSamples. The most basic usage is to specify a fraction of patterns that is drawn randomly as a training set using the argument prop. For example, splitData(rpairs, prop = 0.1) selects one tenth of the data for training. By setting keep.mprop = TRUE it can be enforced that the original ratio of matches to non-matches is retained in the resulting sets. Another possibility is to set the desired number of non-matches and the match ratio through the arguments num.non and mprop, as with genSamples().

Classification functions

All classification methods share two interface functions: trainSupv for calibrating a classifier and classifySupv for classifying new data. At least two arguments are required for trainSupv: the data set on which to train and a string representing the classification method. Currently, the supported methods are:

"rpart"    Recursive partitioning trees, provided by package rpart (Therneau et al., 2009).

"bagging"  Bagging of decision trees, provided by package ipred (Peters and Hothorn, 2009).

"ada"      Stochastic boosting, provided by package ada (Culp et al., 2006).

"svm"      Support vector machines, provided by package e1071 (Dimitriadou et al., 2009).

"nnet"     Single-hidden-layer neural networks, provided by package e1071 (ibid.).

Of the further arguments, use.pred is noteworthy. Setting it to TRUE causes trainSupv to treat the result of a previous prediction as the outcome variable. This has to be used if the training data stem from a run of genSamples. Another application is to obtain a training set by classifying a fraction of the data set by the process of weight calculation and manual setting of thresholds. In order to train a supervised classifier with this training set, one would have to set use.pred = TRUE.

The return value is an object of class "RecLinkClassif". Classification of new data is carried out by passing it and a "RecLinkData" object to classifySupv. The following example shows the application of bagging based on the minimal training set minTrain from the example above.

> model <- trainSupv(minTrain, method = "bagging")
> result <- classifySupv(model, newdata = rpairs)
> summary(result)

Deduplication Data Set
[...]

53 links detected
0 possible links detected
1168 non-links detected

alpha error: 0.020408
beta error: 0.004266
accuracy: 0.995086

Classification table:

            classification
true status    N  P  L
      FALSE 1167  0  5
      TRUE     1  0 48

Discussion

During its development, the functionalities of the package RecordLinkage have already been used by the authors for a period of about one year, individual functions even before that. Thanks to this process of parallel development and usage, bugs were detected early and new requirements that became apparent were accounted for by implementing new features. The package is therefore generally in a usable state. However, there is still potential for future improvements, mainly regarding data handling and performance.

RecordLinkage was developed mainly as a tool for empirical evaluation of record linkage methods, i.e. the main focus of our research was on the performance of particular record linkage methods. Only in one case, the evaluation of a deterministic record linkage procedure used in a German cancer registry, were we actually interested in detecting duplicates. Development of the package followed the needs of this detection process. But many other desirable improvements concerning data processing and input/output facilities are needed for a real-world record linkage process, such as:

• The possibility to use a database connection for data input and output.

• Improved print facilities for clerical review, such as highlighting of agreement or disagreement in record pairs.

The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
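The use.pred workflow described above can be sketched as follows. This is an illustrative sketch, not code from the article: labelling the training fraction via epiWeights/epiClassify and the threshold value are our assumptions, while splitData, trainSupv and classifySupv are used as documented above.

```r
library(RecordLinkage)

## Assumes 'rpairs' is a "RecLinkData" object of compared record
## pairs, as in the article's earlier examples.
parts <- splitData(rpairs, prop = 0.1, keep.mprop = TRUE)

## Label the training fraction by weight calculation and a manually
## chosen threshold (the threshold value here is purely illustrative)
train <- epiWeights(parts$train)
train <- epiClassify(train, threshold.upper = 0.55)

## use.pred = TRUE makes trainSupv treat the predicted classes
## from the weight-based classification as the outcome variable
model  <- trainSupv(train, method = "rpart", use.pred = TRUE)
result <- classifySupv(model, newdata = parts$valid)
summary(result)
```

The same pattern applies when the training data stem from genSamples: train on the returned train component with use.pred = TRUE and validate on valid.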
• The possibility to output a deduplicated data set based on the matching result. This is not a trivial task, as it requires choosing the most likely record from each group of matching items, which can be accomplished by means of linear programming.

Another important future task is performance enhancement. When handling large data sets of about 10^6 or more record pairs, high memory consumption often leads to errors or destabilizes the computer system R is run on. We are currently working on code changes to avoid unnecessary copying of objects and plan to incorporate these in future releases. Another possibility, which needs further evaluation, is to use more sophisticated ways of storing data, such as a database or one of the various R packages for large data sets. The current disadvantage of R concerning performance and memory can be compensated by an elaborated blocking strategy.

As a general improvement, it is intended to support the usage of algorithms not considered in the package through a generic interface for supervised or unsupervised classification.

We are aware that RecordLinkage lacks an important subtask of record linkage, the standardization of records. However, this is an area that itself would need extensive research efforts. Moreover, the appropriate procedures depend heavily on the data to be linked. It is therefore left to the users to tailor standardization methods fitting the requirements of their data. We refer to R's capabilities to handle regular expressions and to the CRAN task view NaturalLanguageProcessing (http://cran.r-project.org/view=NaturalLanguageProcessing).

Bibliography

P. Contiero et al. The Epilink record linkage software. Methods Inf Med., 44(1):66–71, 2005.

M. Culp, K. Johnson, and G. Michailidis. ada: Performs boosting algorithms for a binary response, 2006. URL http://www.stat.lsa.umich.edu/~culpm/math/ada/img.html. R package version 2.0-1.

E. Dimitriadou, K. Hornik, F. Leisch, D. Meyer, and A. Weingessel. e1071: Misc Functions of the Department of Statistics (e1071), TU Wien, 2009. URL http://CRAN.R-project.org/package=e1071. R package version 1.5-19.

I. Fellegi and A. Sunter. A theory for record linkage. Journal of the American Statistical Association, 64:1183–1210, 1969.

E. Haber. AS207: Fitting a general log-linear model. Applied Statistics, 33:358–362, 1984.

J. Michael. Doppelgänger gesucht – ein Programm für kontextsensitive phonetische Textumwandlung. c't, 17(25):252–261, 1999. URL http://www.heise.de/ct/ftp/99/25/252/.

A. Peters and T. Hothorn. ipred: Improved Predictors, 2009. URL http://CRAN.R-project.org/package=ipred. R package version 0.8-7.

M. Sariyar, A. Borg, and K. Pommerening. Evaluation of record linkage methods for iterative insertions. Methods Inf Med., 48(5):429–437, 2009.

T. M. Therneau, B. Atkinson, and B. Ripley (R port). rpart: Recursive Partitioning, 2009. URL http://CRAN.R-project.org/package=rpart. R package version 3.1-45.

W. E. Winkler. String comparator metrics and enhanced decision rules in the Fellegi-Sunter model of record linkage. In Proceedings of the Section on Survey Research Methods, American Statistical Association, pages 354–369, 1990. URL www.amstat.org/sections/srms/proceedings/papers/1990_056.pdf.

Murat Sariyar
Institute of Medical Biostatistics, Epidemiology and Informatics
Mainz, Germany
sariyar@imbei.uni-mainz.de

Andreas Borg
Institute of Medical Biostatistics, Epidemiology and Informatics
Mainz, Germany
borg@imbei.uni-mainz.de
spikeslab: Prediction and Variable Selection Using Spike and Slab Regression

by Hemant Ishwaran, Udaya B. Kogalur and J. Sunil Rao

Abstract  Weighted generalized ridge regression offers unique advantages in correlated high-dimensional problems. Such estimators can be efficiently computed using Bayesian spike and slab models and are effective for prediction. For sparse variable selection, a generalization of the elastic net can be used in tandem with these Bayesian estimates. In this article, we describe the R-software package spikeslab for implementing this new spike and slab prediction and variable selection methodology.

The expression spike and slab, originally coined by Mitchell and Beauchamp (1988), refers to a type of prior used for the regression coefficients in linear regression models (see also Lempers (1971)). In Mitchell and Beauchamp (1988), this prior assumed that the regression coefficients were mutually independent with a two-point mixture distribution made up of a uniform flat distribution (the slab) and a degenerate distribution at zero (the spike). In George and McCulloch (1993) a different prior for the regression coefficient was used. This involved a scale (variance) mixture of two normal distributions. In particular, the use of a normal prior was instrumental in facilitating efficient Gibbs sampling of the posterior. This made spike and slab variable selection computationally attractive and heavily popularized the method.

As pointed out in Ishwaran and Rao (2005), normal-scale mixture priors, such as those used in George and McCulloch (1993), constitute a wide class of models termed spike and slab models. Spike and slab models were extended to the class of rescaled spike and slab models (Ishwaran and Rao, 2005). Rescaling was shown to induce a non-vanishing penalization effect, and when used in tandem with a continuous bimodal prior, confers useful model selection properties for the posterior mean of the regression coefficients (Ishwaran and Rao, 2005, 2010).

Recently, Ishwaran and Rao (2010) considered the geometry of generalized ridge regression (GRR), a method introduced by Hoerl and Kennard to overcome ill-conditioned regression settings (Hoerl and Kennard, 1970a,b). This analysis showed that GRR possesses unique advantages in high-dimensional correlated settings, and that weighted GRR (WGRR) regression, a generalization of GRR, is potentially even more effective. Noting that the posterior mean of the regression coefficients from a rescaled spike and slab model is a type of WGRR estimator, they showed that this WGRR estimator, referred to as the Bayesian model averaged (BMA) estimator, when coupled with dimension reduction, yielded low test-set mean-squared-error when compared to the elastic net (Zou and Hastie, 2005).

Additionally, Ishwaran and Rao (2010) introduced a generalization of the elastic net, which they coined the gnet (short for generalized elastic net). The gnet is the solution to a least-squares penalization problem in which the penalization involves an overall ℓ1-regularization parameter (used to impose sparsity) and a unique ℓ2-regularization parameter for each variable (these latter parameters being introduced to combat multicollinearity). To calculate the gnet, a lasso-type optimization is used by fixing the ℓ2-regularization parameters at values determined by finding the closest GRR to the BMA. Like the BMA, the gnet is highly effective for prediction. However, unlike the BMA, which is obtained by model averaging, and therefore often contains many small coefficient values, the gnet is much sparser, making it more attractive for variable selection.

The gnet and BMA estimators represent attractive solutions for modern day high-dimensional data settings. These problems often involve correlated variables, in part due to the nature of the data, and in part due to an artifact of the dimensionality [see Cai and Lv (2007); Fan and Lv (2008) for a detailed discussion about high-dimensional correlation]. The BMA is attractive because it addresses the issue of correlation by drawing upon the properties of WGRR estimation, a strength of the Bayesian approach, while the gnet achieves sparse variable selection by drawing upon the principle of soft-thresholding, a powerful frequentist regularization concept.

Because high-dimensional data is becoming increasingly common, it would be valuable to have user friendly software for computing the gnet and BMA estimator. With this in mind, we have developed an R package spikeslab for implementing this methodology (Ishwaran, Kogalur and Rao, 2010). The main purpose of this article is to describe this package. Because this new spike and slab approach may be unfamiliar to users in the R-community, we start by giving a brief high-level description of the algorithm [for further details readers should however consult Ishwaran and Rao (2010)]. We then highlight some of the package's key features, illustrating its use in both low- and high-dimensional settings.
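For concreteness, the two priors described above can be written out; the notation below is ours, not the article's.

```latex
% Two-point mixture of Mitchell and Beauchamp (1988): a point mass at
% zero (the spike) and a uniform flat distribution (the slab); w and a
% are generic symbols for the mixing weight and the slab half-width.
\pi(\beta_j) \;=\; (1 - w)\,\delta_0(\beta_j)
  \;+\; w \cdot \tfrac{1}{2a}\,\mathbf{1}\{|\beta_j| \le a\}

% Scale (variance) mixture of two normals, as in George and
% McCulloch (1993), with latent indicator \gamma_j and c_j > 1:
\beta_j \mid \gamma_j \;\sim\;
  (1 - \gamma_j)\,\mathrm{N}(0, \tau_j^2)
  \;+\; \gamma_j\,\mathrm{N}(0, c_j^2 \tau_j^2)
```

In the second form both components are continuous, which is what makes Gibbs sampling of the posterior straightforward.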
The spike and slab algorithm

The spikeslab R package implements the rescaled spike and slab algorithm described in Ishwaran and Rao (2010). This algorithm involves three key steps:

1. Filtering (dimension reduction).
2. Model Averaging (BMA).
3. Variable Selection (gnet).

Step 1 filters all but the top nF variables, where n is the sample size and F > 0 is the user specified fraction. Variables are ordered on the basis of their absolute posterior mean coefficient value, where the posterior mean is calculated using Gibbs sampling applied to an approximate rescaled spike and slab posterior. Below is a toy-illustration of how filtering works [p is the total number of variables and V(k), k = 1, ..., p, are the ordered variables]:

    V(1), ..., V([nF])        V([nF]+1), ..., V(p)
    retain these variables    filter these variables

The value for F is set using the option bigp.smalln.factor, which by default is set to the value F = 1. The use of an approximate posterior in the filtering step is needed in high dimensions. This yields an ultra-fast Gibbs sampling procedure, with each step of the Gibbs sampler requiring O(np) operations. Thus, computational effort for the filtering step is linear in both dimensions.

Step 2 fits a rescaled spike and slab model using only those variables that are not filtered in Step 1. Model fitting is implemented using a Gibbs sampler. Computational times are generally rapid as the number of variables at this point are a fraction of the original size, p. A blocking technique is used to further reduce computational times. The posterior mean of the regression coefficients, which we refer to as the BMA, is calculated and returned. This (restricted) BMA is used as an estimator for the regression coefficients.

Step 3 calculates the gnet. In the optimization, the gnet's ℓ2-regularization parameters are fixed (these being determined from the restricted BMA obtained in Step 2) and its solution path with respect to its ℓ1-regularization parameter is calculated using the lars R package (Hastie and Efron, 2007) [a package dependency of spikeslab]. The lars wrapper is called with type = "lar" to produce the full LAR path solution (Efron et al., 2004). The gnet is defined as the model in this path solution minimizing the AIC criterion. Note importantly, that cross-validation is not used to optimize the ℓ1-regularization parameter. The gnet estimator is generally very stable, a property that it inherits from the BMA, and thus even simple model selection methods such as AIC work quite well in optimizing its path solution. This is different than say the elastic net (Zou and Hastie, 2005) where cross-validation is typically used to determine its regularization parameters (often this involves a double optimization over both the ℓ1- and ℓ2-regularization parameters). This is an important feature which reduces computational times in big-p problems.

Low-dimensional settings

Although the spike and slab algorithm is especially adept in high-dimensional settings, it can be used effectively in classical settings as well. In these low-dimensional scenarios when p < n, the algorithm is implemented by applying only Steps 2 and 3 (i.e., Step 1 is skipped). The default setting for spikeslab, in fact, assumes a low-dimensional scenario.

As illustration, we consider the benchmark diabetes data (n = 442, p = 64) used in Efron et al. (2004) and which is an example dataset included in the package. The response Y is a quantitative measure of disease progression for patients with diabetes. The data includes 10 baseline measurements for each patient, in addition to 45 interactions and 9 quadratic terms, for a total of 64 variables for each patient. The following code implements a default analysis:

data(diabetesI, package = "spikeslab")
set.seed(103608)
obj <- spikeslab(Y ~ . , diabetesI)
print(obj)

The print call outputs a basic summary of the analysis, including a list of the selected variables and their parameter estimates (variables selected are those having nonzero gnet coefficient estimates):

             bma    gnet bma.scale gnet.scale
bmi       24.076  23.959   506.163    503.700
ltg       23.004  22.592   483.641    474.965
map       14.235  12.894   299.279    271.089
hdl      -11.495 -10.003  -241.660   -210.306
sex       -7.789  -6.731  -163.761   -141.520
age.sex    6.523   5.913   137.143    124.322
bmi.map    3.363   4.359    70.694     91.640
glu.2      2.185   3.598    45.938     75.654
age.ltg    1.254   0.976    26.354     20.528
bmi.2      1.225   1.837    25.754     38.622
age.map    0.586   0.928    12.322     19.515
age.2      0.553   0.572    11.635     12.016
sex.map    0.540   0.254    11.349      5.344
glu        0.522   0.628    10.982     13.195
age.glu    0.417   0.222     8.757      4.677

In interpreting the table, we note the following:

(i) The first column with the heading bma lists the coefficient estimates for the BMA estimator obtained from Step 2 of the algorithm. These values are given in terms of the standardized covariates (mean of 0 and variance of 1).
(ii) The second column with the heading gnet lists the coefficient estimates for gnet obtained from Step 3 of the algorithm. These values are also given in terms of the standardized covariates.

(iii) The last two columns are the BMA and gnet estimators given in terms of the original scale of the variables. These columns are used for prediction, while the first two columns are useful for assessing the relative importance of variables.

Note that all columns can be extracted from the spike and slab object, obj, if desired.

Stability analysis

Even though the gnet accomplishes the goal of variable selection, it is always useful to have a measure of stability of a variable. The wrapper cv.spikeslab can be used for this purpose.

The call to this wrapper is very simple. Here we illustrate its usage on the diabetes data:

y <- diabetesI[, 1]
x <- diabetesI[, -1]
cv.obj <- cv.spikeslab(x = x, y = y, K = 20)

This implements 20-fold validation (the number of folds is set by using the option K). The gnet estimator is fit using the training data and its test-set mean-squared-error (MSE) for its entire solution-path is determined. As well, for each fold, the optimal gnet model is determined by minimizing test-set error. The average number of times a variable is selected in this manner defines its stability (this is recorded in percentage as a value from 0%–100%). Averaging the gnet's test-set MSE provides an estimate of its MSE as a function of the number of variables.

The gnet's coefficient values (estimated using the full data) and its stability values can be obtained from the cv.obj using the following commands:

cv.stb <- as.data.frame(cv.obj$stability)
gnet <- cv.stb$gnet
stability <- cv.stb$stability

Figure 1 (top) plots the gnet's cross-validated MSE curve as a function of the model size. The plot was produced with the command

plot(cv.obj, plot.type = "cv")

Close inspection (confirmed by considering the object, cv.obj) shows that the optimal model size is somewhere between 9 and 17, agreeing closely with our previous analysis. The bottom plot shows how gnet coefficient estimates vary in terms of their stability values (obtained by plotting gnet versus stability). There are 10 variables having stability values greater than 80%.

[Figure 1: Stability analysis for diabetes data. Top: cross-validated MSE against model size. Bottom: gnet coefficient estimates against stability (%); the labelled variables are bmi, ltg, map, hdl, age.sex, bmi.map, glu.2, sex, age.map and age.glu.]

High-dimensional settings

To analyze p >> n data, users should use the option bigp.smalln=TRUE in the call to spikeslab. This will invoke the full spike and slab algorithm including the filtering step (Step 1) which is crucial to success in high-dimensional settings (note that p ≥ n for this option to take effect). This three-step algorithm is computationally efficient, and because the bulk of the computations are linear in p, the algorithm should scale effectively to very large p-problems. However, in order to take full advantage of its speed, there are a few simple, but important rules to keep in mind.

First, users should avoid using the formula and data-frame call to spikeslab when p is large. Instead they should pass the x-covariate matrix and y-response vector directly. This avoids the tremendous overhead required to parse formulas in R.

Second, the final model size of the BMA and gnet are controlled by two key options; these must be set properly to keep computations manageable. These
options are: bigp.smalln.factor and max.var. The first option restricts the number of filtered variables in Step 1 of the algorithm to be no larger than Fn, where F > 0 is the value bigp.smalln.factor. The default setting F = 1 should be adequate in most scenarios [one exception is when n is very large (but smaller than p); then F should be decreased to some value 0 < F < 1]. The second option, max.var, restricts the number of selected variables in both Steps 1 and 3 of the algorithm. Its function is similar to bigp.smalln.factor, although unlike bigp.smalln.factor, it directly controls the size of gnet. The default value is max.var=500. In most examples, it will suffice to work with bigp.smalln.factor.

Thus, if x is the x-matrix, y is the y-response vector, and f and m are the desired settings for bigp.smalln.factor and max.var, then a generic call in high-dimensional settings would look like:

obj <- spikeslab(x = x, y = y, bigp.smalln = TRUE,
                 bigp.smalln.factor = f, max.var = m)

Although spikeslab has several other options, most users will not need these and the above call should suffice for most examples. However, if computational times are still of concern even after tuning f and m, users may consider changing the default values of n.iter1 and n.iter2. The first controls the number of burn-in iterations used by the Gibbs sampler, and the second controls the number of Gibbs sampled values following burn-in (these latter values are used for inference and parameter estimation). The default setting is 500 in both cases. Decreasing these values will decrease computational times, but accuracy will suffer. Note that if computational times are not a concern, then both values could be increased to 1000 (but not much more is needed) to improve accuracy.

As illustration, we used a simulation with n = 100 and p = 2000. The data was simulated independently in blocks of size 40. Within each block, the x-variables were drawn from a multivariate normal distribution with mean zero and equicorrelation matrix with ρ = 0.95. With probability 0.9, all regression coefficients within a block were set to zero, otherwise with probability 0.1, all regression coefficients were set to zero except for the first 10 coefficients, which were each assigned a randomly chosen value from a standard normal distribution. Random noise ε was simulated independently from a N(0, σ²) distribution with σ = 0.4.

The top plot in Figure 2 displays the path solution for the gnet. Such a plot can be produced by a call to the lars wrapper plot.lars using the gnet.obj obtained from the spikeslab call. As gnet.obj is a lars-type object it is fully interpretable by the lars package, and thus it can be parsed by the package's various wrappers. For convenience, the path solution can be produced by a direct call to plot; a typical call being:

obj <- spikeslab(x = x, y = y, bigp.smalln = TRUE)
plot(obj, plot.type = "path")

Actually Figure 2 was not produced by a call to plot but in fact was obtained by slightly modifying the plot.lars wrapper so as to display the paths of a variable color coded by its true coefficient value (blue for truly zero and red for truly nonzero). We did this in order to facilitate comparison to the lasso. The lasso path (obtained using the LAR-solution) is displayed in the bottom plot of Figure 2. Notice how in contrast to gnet, the path solution for the lasso has a wiggly "spaghetti"-like shape and that many of the truly nonzero coefficients are hard to identify because of this. This is a direct consequence of the high correlation in the x-variables of this example. This correlation creates instability in the lasso-LAR solution, and this ultimately impacts its performance.

[Figure 2: Path solutions for the gnet (top) and the lasso (bottom) from a correlated high-dimensional simulation (n = 100 and p = 2000). Blue and red lines correspond to truly zero and truly nonzero coefficient values, respectively.]
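The simulation design just described can be sketched in base R as follows. This is our reading of the design (independent blocks of 40 equicorrelated variables); sim.data is a hypothetical helper name, and the final fit uses the generic high-dimensional call given in the text.

```r
## Sketch of the simulation design described above (our reading):
## independent blocks of 40 variables, equicorrelated with rho = 0.95;
## with probability 0.1 the first 10 coefficients of a block are drawn
## from N(0, 1), otherwise the whole block's coefficients are zero.
sim.data <- function(n = 100, p = 2000, block = 40, rho = 0.95) {
  nblocks <- p / block
  ## Cholesky factor of the block equicorrelation matrix
  R <- chol(matrix(rho, block, block) + diag(1 - rho, block))
  x <- do.call(cbind, replicate(nblocks,
         matrix(rnorm(n * block), n, block) %*% R, simplify = FALSE))
  beta <- rep(0, p)
  for (b in seq_len(nblocks)) {
    if (runif(1) < 0.1)
      beta[(b - 1) * block + 1:10] <- rnorm(10)
  }
  y <- as.vector(x %*% beta + rnorm(n, sd = 0.4))
  list(x = x, y = y, beta = beta)
}

dat <- sim.data()

## High-dimensional fit, as in the generic call from the text
library(spikeslab)
obj <- spikeslab(x = dat$x, y = dat$y, bigp.smalln = TRUE)
```

Multiplying i.i.d. normal draws by the Cholesky factor R gives rows with the desired equicorrelation structure, since t(R) %*% R recovers the block covariance matrix.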
Summary

The gnet incorporates the strength of Bayesian WGRR estimation with that of frequentist soft thresholding. These combined strengths make it an effective tool for prediction and variable selection in correlated high-dimensional settings. If variable selection is not of concern, and the key issue is accurate prediction, then the BMA may be preferred. Both the gnet and BMA can be computed using the spikeslab R package. This package is computationally efficient, and scales effectively even to massively large p-problems.

As one example of this scalability, we added 100,000 noise variables to the diabetes data set and then made a call to cv.spikeslab with the added options bigp.smalln = TRUE, max.var = 100 and parallel = TRUE (as before we used K = 20 fold validation). The parallel option invokes parallel processing that is implemented via the package snow (Tierney et al., 2008) [note that sending in an integer for the option parallel sets the number of socket clusters on the local machine on which the session is being initiated; in our example we actually used parallel = 8]. The snow package should be loaded prior to making the cv.spikeslab call.

[Figure 3: Path solution for the gnet for diabetes data with p = 100,000 noise variables.]

Figure 3 displays the gnet's path solution (obtained using the full data). While only 4 variables have path-profiles that clearly stand out, impressively these variables are the top 4 from our previous analysis. The gnet estimates (scaled to standardized covariates), its averaged cv-estimates, and the stability values for the top 15 variables were:

              gnet     gnet.cv  stability
ltg     20.1865498 20.14414604        100
bmi     18.7600433 22.79892835        100
map      4.7111022  4.83179363         95
hdl     -2.5520177 -2.66839785         95
bmi.2    3.6204750  0.46308305         40
x.61746  0.0000000 -0.74646210         35
x.42036  4.8342736  0.41993669         30
x.99041  0.0000000 -0.70183515         30
x.82308  5.2728011  0.75420320         25
glu      1.3105751  0.16714059         25
x.46903  0.0000000 -0.65188451         25
x.57061  0.0000000  0.73203633         25
x.99367 -2.7695621 -0.22110463         20
tch      0.2542299  0.14837708         20
x.51837  0.0000000 -0.09707276         20

Importantly, note that the top 4 variables have greater than or equal to 95% stability (variables starting with "x." are noise variables). It is also interesting that 3 other non-noise variables, "bmi.2", "glu", and "tch" were in the top 15 variables. In fact, when we inspected the 100 variables that passed the filtering step of the algorithm (applied to the full data), we found that 10 were from the original 64 variables, and 6 were from the top 15 variables from our earlier analysis. This demonstrates stability of the filtering algorithm even in ultra-high dimensional problems.

Finally, we remark that in illustrating the spikeslab package in this article, we focused primarily on the spikeslab wrapper, which is the main entry point to the package. Other available wrappers include predict.spikeslab for prediction on test data, and sparsePC.spikeslab. The latter implements variable selection for multiclass gene expression data (Ishwaran and Rao, 2010).

In future work we plan to extend the rescaled spike and slab methodology to high-dimensional generalized linear models. At that time we will introduce a corresponding wrapper.

Acknowledgements

This work was partially funded by the National Science Foundation grant DMS-0705037. We thank Vincent Carey and two anonymous referees for helping to substantially improve this manuscript.

Bibliography

T.T. Cai and J. Lv. Discussion: The Dantzig selector: statistical estimation when p is much larger than n. Ann. Statist., 35:2365, 2007.

B. Efron, T. Hastie, I. Johnstone and R. Tibshirani. Least angle regression (with discussion). Ann. Statist., 32:407–499, 2004.

J. Fan and J. Lv. Sure independence screening for ultra-high dimensional feature space. J. Royal Statist. Soc. B, 70(5):849–911, 2008.
E.I. George and R.E. McCulloch. Variable selection via Gibbs sampling. J. Amer. Stat. Assoc., 88:881–889, 1993.

T. Hastie and B. Efron. lars: Least Angle Regression, Lasso and Forward Stagewise, 2010. R package version 0.9-7.

A.E. Hoerl and R.W. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12:55–67, 1970.

A.E. Hoerl and R.W. Kennard. Ridge regression: Applications to nonorthogonal problems (Corr: V12 p723). Technometrics, 12:69–82, 1970.

H. Ishwaran and J.S. Rao. Spike and slab variable selection: frequentist and Bayesian strategies. Ann. Statist., 33:730–773, 2005.

H. Ishwaran and J.S. Rao. Generalized ridge regression: geometry and computational solutions when p is larger than n. Manuscript, 2010.

H. Ishwaran, U.B. Kogalur and J.S. Rao. spikeslab: Prediction and variable selection using spike and slab regression, 2010. R package version 1.1.2.

F. B. Lempers. Posterior Probabilities of Alternative Linear Models. Rotterdam University Press, Rotterdam, 1971.

T.J. Mitchell and J.J. Beauchamp. Bayesian variable selection in linear regression. J. Amer. Stat. Assoc., 83:1023–1036, 1988.

L. Tierney, A. J. Rossini, N. Li and H. Sevcikova. snow: Simple Network of Workstations, 2008. R package version 0.3-3.

H. Zou and T. Hastie. Regularization and variable selection via the elastic net. J. Royal Statist. Soc. B, 67(2):301–320, 2005.

Hemant Ishwaran
Dept. Quantitative Health Sciences
Cleveland Clinic
Cleveland, OH 44195
hemant.ishwaran@gmail.com

Udaya B. Kogalur
Dept. Quantitative Health Sciences, Cleveland Clinic
Cleveland, OH 44195
kogalurshear@gmail.com

J. Sunil Rao
Dept. of Epidemiology and Public Health
School of Medicine, University of Miami
Miami, FL 33136
rao.jsunil@gmail.com
FROM THE CORE

What's New?

by Kurt Hornik and Duncan Murdoch

Abstract  We discuss how news in R and add-on packages can be disseminated. R 2.10.0 has added facilities for computing on news in a common plain text format. In R 2.12.0, the Rd format has been further enhanced so that news can be very conveniently recorded in Rd, allowing for both improved rendering and the development of new news services.

The R development community has repeatedly discussed how to best represent and disseminate the news in R and R add-on packages. The GNU Coding Standards (http://www.gnu.org/prep/standards/html_node/NEWS-File.html) say

    In addition to its manual, the package should have a file named
    'NEWS' which contains a list of user-visible changes worth
    mentioning. In each new release, add items to the front of the
    file and identify the version they pertain to. Don't discard old
    items; leave them in the file after the newer items. This way, a
    user upgrading from any previous version can see what is new.

    If the 'NEWS' file gets very long, move some of the older items
    into a file named 'ONEWS' and put a note at the end referring the
    user to that file.

For a very long time R itself used a three-layer (series, version, category) plain text 'NEWS' file recording individual news items in itemize-type lists, using 'o' as item tag and aligning items to a common column (as quite commonly done in open source projects).

R 2.4.0 added a function readNEWS() (eventually moved to package tools) for reading the R 'NEWS' file into a news hierarchy (a series list of version lists of category lists of news items). This makes it possible to compute on the news (e.g. changes are displayed daily in the RSS feeds available from http://developer.r-project.org/RSSfeeds.html), and to transform the news structure into sectioning markup (e.g. when creating an HTML version for reading via the on-line help system). However, the items were still plain text: clearly, it would be useful to be able to hyperlink to resources provided by the on-line help system (Murdoch and Urbanek, 2009) or other URLs, e.g., those providing information on bug reports. In addition, it has always been quite a nuisance to add LaTeX markup to the news items for the release notes in the R Journal (and its predecessor, R News). The R Core team had repeatedly discussed the possibilities of moving to a richer markup system (and generating plain text, HTML and LaTeX from this), but had been unable to identify a solution that would be appropriate in terms of convenience as well as the effort required.

R add-on packages provide news-type information in a variety of places and plain text formats, with a considerable number of 'NEWS' files in the package top-level source directory or its 'inst' subdirectory, and in the spirit of a one- or two-layer variant of the R news format. Whereas all these files can conveniently be read by "users", the lack of standardization implies that reliable computation on the news items is barely possible, even though this could be very useful. For example, upgrading packages could optionally display the news between the available and installed versions of packages (similar to the apt-listchanges facility in Debian-based Linux systems). One could also develop repository-level services extracting and disseminating the news in all packages in the repository (or suitable groups of these, such as task views (Zeileis, 2005)) on a regular basis. In addition, we note that providing the news in plain text limits the usefulness in package web pages and for possible inclusion in package reference manuals (in PDF format).

R 2.10.0 added a function news() (in package utils) to possibly extract the news for R or add-on packages into a simple (plain text) data base, and use this for simple queries. The data base is a character frame with variables Version, Category, Date and Text, where the last contains the entry texts read, and the other variables may be NA if they were missing or could not be determined. These variables can be used in the queries. E.g., we build a data base of all currently available R news entries:

> db <- news()

This has 2950 news entries, distributed according to version as follows:

> with(db, table(Version))
Version
        2.0.1 2.0.1 patched         2.1.0
           56            27           223
        2.1.1 2.1.1 patched        2.10.0
           56            31           153
       2.10.1 2.10.1 patched       2.11.0
           43            35           132
       2.11.1         2.2.0         2.2.1
           28           188            63
2.2.1 patched         2.3.0         2.3.1
           18           250            38
2.3.1 patched         2.4.0         2.4.1
           28           225            60
2.4.1 patched         2.5.0         2.5.1
           15           208            50
2.5.1 patched         2.6.0         2.6.1
           16           177            31
        2.6.2 2.6.2 patched         2.7.0
75.
F ROM THE C ORE 75 53 14 186 Clearly, package maintainers could check readabil- 2.7.1 2.7.2 2.7.2 patched ity for themselves and ﬁx incompatibilities with the 66 45 6 default format, or switch to it (or contribute read- 2.8.0 2.8.1 2.8.1 patched ers for their own formats). But ideally, maintainers 132 59 27 could employ a richer format allowing for more reli- 2.9.0 2.9.1 2.9.2 able computations and better processing. 124 56 26 2.9.2 patched In R 2.12.0, the new Rd format was enhanced 5 in several important ways. The subsection and newcommand macros were added, to allow for sub-(This shows “versions” such as ‘2.10.1 patched’, sections within sections, and user-deﬁned macros.which correspond to news items in the time inter- One can now convert R objects to fragments of Rdval between the last patchlevel release and the next code, and process these. And ﬁnally, the renderingminor or major releases of R (in the above, 2.10.1 (of text in particular) is more customizable. Givenand 2.11.0), respectively. These are not valid ver- the availability of subsection, one can now conve-sion numbers, and are treated as the corresponding niently map one- and two-layer hierarchies of itempatchlevel version in queries.) To extract all bug ﬁxes lists into sections (and subsections) of Rd itemizesince R 2.11.0 that include a PR number from a formal lists. Altogether, the new Rd format obviouslybug report, we can query the news db as follows: becomes a very attractive (some might claim, the canonical) format for maintaining R and package> items <- news(Version >= "2.11.0" & news information, in particular given that most R de-+ grepl("^BUG", Category) & velopers will be familiar with the Rd format and that+ grepl("PR#", Text), Rd can be rendered as text, HTML (for on-line help)+ db = db) and L TEX (for PDF manuals). AThis ﬁnds 22 such items, the ﬁrst one being R 2.12.0 itself has switched to the Rd format for recording its news. 
To see the effect, run> writeLines(strwrap(items[1, "Text"])) help.start(), click on the NEWS link and note that e.g. bug report numbers are now hyperlinked to theThe C function mkCharLenCE now no longerreads past len bytes (unlikely to be a respective pages in the bug tracking system. Theproblem except in user code). (PR#14246) conversion was aided by the (internal) utility func- tion news2Rd() in package tools which converted the Trying to extract the news for add-on packages legacy plain text ‘NEWS’ to Rd, but of course requiredinvolved a trial-and-error process to develop a rea- additional effort enriching the markup.sonably reliable default reader for the ‘NEWS’ ﬁles R 2.12.0 is also aware of ‘inst/NEWS.Rd’ ﬁles inin a large subset of available packages, and subse- add-on packages, which are used by news() in pref-quently documenting the common format (require- erence to the plain text ‘NEWS’ and ‘inst/NEWS’ ﬁles.ments). This reader looks for version headers and For ‘NEWS’ ﬁles in a format which can successfullysplits the news ﬁle into respective chunks. It then be handled by the default reader, package maintain-tries to read the news items from the chunks, after ers can use tools:::news2Rd(dir, "NEWS.Rd"), pos-possibly splitting according to category. For each sibly with additional argument codify = TRUE, withchunk found, it records whether (it thinks) it success- dir a character string specifying the path to a pack-fully processed the chunk. To assess how well this age’s root directory. Upon success, the ‘NEWS.Rd’reader actually works, we apply it to all ‘NEWS’ ﬁles ﬁle can be further improved and then be moved toin the CRAN packages. As of 2010-09-11, there are the ‘inst’ subdirectory of the package source direc-427 such ﬁles for a “news coverage” of about 20%. tory. For now, the immediate beneﬁts of switchingFor 67 ﬁles, no version chunks are found. 
For 40 to the new format is that news() will work more reli-ﬁles, all chunks are marked as bad (indicating that ably, and that the package news web pages on CRANwe found some version headers, but the ﬁles really will look better.use a different format). For 60 ﬁles, some chunks are As packages switch to Rd format for their news,bad, with the following summary of the frequencies additional services taking advantage of the new for-of bad chunks: mat can be developed and deployed. For the near future, we are planning to integrate package news Min. 1st Qu. Median Mean 3rd Qu. into the on-line help system, and to optionally inte-0.005747 0.057280 0.126800 0.236500 0.263800 Max. grate the news when generating the package refer-0.944400 ence manuals. A longer term goal is to enhance the package management system to include news com-Finally, for 260 ﬁles, no chunks are marked as bad, putations in apt-listchanges style. We hope thatsuggesting a success rate of about 61 percent. Given repository “news providers” will take advantage ofthe lack of formal standardization, this is actually the available infrastructure to enhance their news“not bad for a start”, but certainly not good enough. services (e.g., using news diffs instead of, or in ad-The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
76.
76 F ROM THE C OREdition to, the currently employed recursive diff sum- Kurt Hornikmaries). But perhaps most importantly, we hope that Department of Finance, Accounting and Statisticsthe new format will make it more attractive for pack- Wirtschaftsuniversität Wienage maintainers to actually provide valuable news Augasse 2–6, 1090 Wieninformation. Austria Kurt.Hornik@R-project.orgBibliography Duncan Murdoch Department of Statistical and Actuarial SciencesDuncan Murdoch and Simon Urbanek. The new R University of Western Ontario help system. The R Journal, 1(2):60–65, 2009. London, Ontario, CanadaAchim Zeileis. CRAN task views. R News, 5(1):39–40, murdoch@stats.uwo.ca May 2005.The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
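To make the Rd news format discussed in this article concrete, here is a minimal sketch of what an 'inst/NEWS.Rd' file might look like. The package name, version and news items are invented for illustration; the structural pieces (\name, \title, \section, \subsection, \itemize) are the Rd constructs mentioned above:

```
\name{NEWS}
\title{News for package 'foo'}
\section{Changes in foo version 0.2-0}{
  \subsection{NEW FEATURES}{
    \itemize{
      \item Added a \code{summary()} method.
    }
  }
  \subsection{BUG FIXES}{
    \itemize{
      \item \code{print()} no longer drops names (a hypothetical item).
    }
  }
}
```

Because this is ordinary Rd, the same file can be rendered as text, HTML or LaTeX by the existing conversion tools, which is precisely what makes the format attractive for recording news.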
NEWS AND NOTES

useR! 2010
by Katharine Mullen

Abstract  A summary of the useR! 2010 conference.

The R user conference, useR! 2010, took place on the Gaithersburg, Maryland, USA campus of the National Institute of Standards and Technology (NIST) July 21-23, 2010. Following the five previous useR! conferences (held in Austria, Germany, Iowa, and France), useR! 2010 focused on

• R as the 'lingua franca' of data analysis and statistical computing,
• providing a venue to discuss and exchange ideas on the use of R for statistical computations, data analysis, visualization, and exciting applications in various fields, and
• providing an overview of the new features of the rapidly evolving R project.

The conference drew over 450 R users hailing from 29 countries. The technical program was composed of 167 contributed presentations, seven invited lectures, a poster session, and a panel discussion on 'Challenges of Bringing R into Commercial Environments'. The social program of the conference included several receptions and a dinner at the National Zoo.

User-contributed presentations

The diversity of interests in the R community was reflected in the themes of the user-contributed sessions. The themes were:

• Bioinformatics workflows
• Biostatistics (three sessions)
• Biostatistics workflows
• Business intelligence
• Cloud computing
• Commercial applications (two sessions)
• Computer experiments and simulation
• Data manipulation and classification
• Data mining, machine learning
• Finance and resource allocation
• fMRI
• Gene expression, genetics
• Grid computing
• Graphical User Interfaces (two sessions)
• High-performance computing
• Interfaces
• Lists and objects
• Longitudinal data analysis
• MetRology
• Optimization
• Parallel computing
• Pedagogy (three sessions)
• Real-time computing
• Reproducible research and generating reports
• R social networks
• RuG panel discussion
• Social sciences
• Spatio-temporal data analysis
• Spreadsheets and RExcel
• Time series
• Visualization

In addition to sessions on these themes, research was presented in a poster session and in Kaleidoscope sessions aimed at a broad audience.

Program, organizing and conference committees

Organization of the conference was thanks to individuals participating in the following committees:

Program Committee:
Louis Bajuk-Yorgan, Dirk Eddelbuettel, John Fox, Virgilio Gómez-Rubio, Richard Heiberger, Torsten Hothorn, Aaron King, Jan de Leeuw, Nicholas Lewin-Koh, Andy Liaw, Uwe Ligges, Martin Mächler, Katharine Mullen, Heather Turner, Ravi Varadhan, H. D. Vinod, John Verzani, Alan Zaslavsky, Achim Zeileis

Organizing Committee:
Kevin Coakley, Nathan Dodder, David Gil, William Guthrie, Olivia Lau, Walter Liggett, John Lu, Katharine Mullen, Jonathon Phillips, Antonio Possolo, Daniel Samarov, Ravi Varadhan

R Conference Committee:
Torsten Hothorn, Achim Zeileis

Tutorials

The day before the official start of the conference, on July 20, the following nineteen 3-hour tutorials were given by R experts:

• Douglas Bates: Fitting and evaluating mixed models using lme4
• Peter Danenberg and Manuel Eugster: Literate programming with Roxygen
• Karin Groothuis-Oudshoorn and Stef van Buuren: Handling missing data in R with MICE
• Frank Harrell Jr: Statistical presentation graphics
• François Husson and Julie Josse: Exploratory data analysis with a special focus on clustering and multiway methods
• Uwe Ligges: My first R package
• Daniel Samarov, Errol Strain and Elaine McVey: R for Eclipse
• Jing Hua Zhao: Genetic analysis of complex traits
• Alex Zolot: Work with R on Amazon's Cloud
• Karim Chine: Elastic-R, a google docs-like portal for data analysis in the cloud
• Dirk Eddelbuettel: Introduction to high-performance computing with R
• Michael Fay: Interval censored data analysis
• Virgilio Gómez-Rubio: Applied spatial data analysis with R
• Frank Harrell Jr: Regression modeling strategies using the R package rms
• Olivia Lau: A crash course in R programming
• Friedrich Leisch: Sweave - Writing dynamic and reproducible documents
• John Nash: Optimization and related nonlinear modelling computations in R
• Brandon Whitcher, Pierre Lafaye de Micheaux, Bradley Buchsbaum and Jörg Polzehl: Medical image analysis for structural and functional MRI

Invited lectures

The distinguished invited lecturers were:

• Mark S. Handcock: Statistical Modeling of Networks in R
• Frank E. Harrell Jr: Information Allergy
• Friedrich Leisch: Reproducible Statistical Research in Practice
• Uwe Ligges: Prospects and Challenges for CRAN - with a glance on 64-bit Windows binaries
• Richard M. Stallman: Free Software in Ethics and in Practice
• Luke Tierney: Some possible directions for the R engine
• Diethelm Würtz: The Hull, the Feasible Set, and the Risk Surface: A Review of the Portfolio Modeling Infrastructure in R/Rmetrics

In addition, Antonio Possolo (Chief of the Statistical Engineering Division at NIST) began the conference with a rousing speech to welcome participants.

Conference-related information and thanks

The conference webpage http://www.R-project.org/useR-2010 makes available abstracts and slides associated with presentations, as well as links to video of plenary sessions. Questions regarding the conference may be addressed to useR-2010@R-project.org.

Many thanks to all those who contributed to useR! 2010. The talks, posters, ideas, and spirit of cooperation that R users from around the world brought to Gaithersburg made the conference a great success.

Katharine Mullen
Structure Determination Methods Group
Ceramics Division
National Institute of Standards and Technology (NIST)
100 Bureau Drive, M/S 8520
Gaithersburg, MD, 20899, USA
Email: Katharine.Mullen@nist.gov
Forthcoming Events: useR! 2011

The seventh international R user conference, useR! 2011, will take place at the University of Warwick, Coventry, 16-18 August 2011.

Following previous useR! conferences, this meeting of the R user community will provide a platform for R users to discuss and exchange ideas of how R can be used for statistical computation, data analysis and visualization.

The program will consist of invited talks, user-contributed sessions and pre-conference tutorials.

Invited Talks

The invited talks represent the spectrum of interest, ranging from important technical developments to exciting applications of R, presented by experts in the field:

• Adrian Bowman: Modelling Three-dimensional Surfaces in R.
• Lee Edlefsen: High Performance Computing in R.
• Ulrike Grömping: Design of Experiments in R.
• Wolfgang Huber: From Genomes to Phenotypes.
• Brian Ripley: The R Development Process.
• Jonathan Rougier: Calibration and Prediction for Stochastic Dynamical Systems Arising in Environmental Science.
• Simon Urbanek: R Graphics: Supercharged - Recent Advances in Visualization and Analysis of Large Data in R.
• Brandon Whitcher: Quantitative Analysis of Medical Imaging Data in R.

User-contributed Sessions

In the contributed sessions, presenters will share innovative and interesting uses of R, covering topics such as:

• Bayesian statistics
• Bioinformatics
• Chemometrics and computational physics
• Data mining
• Econometrics and finance
• Environmetrics and ecological modeling
• High performance computing
• Imaging
• Interfaces with other languages/software
• Machine learning
• Multivariate statistics
• Nonparametric statistics
• Pharmaceutical statistics
• Psychometrics
• Spatial statistics
• Statistics in the social and political sciences
• Teaching
• Visualization and graphics

Abstracts may be submitted for the poster session, which will be a major social event on the evening of the first day of the conference, or for the main program of talks. Submissions for contributed talks will be considered for the following types of session:

• useR! Kaleidoscope: These sessions give a broad overview of the many different applications of R and should appeal to a wide audience.
• useR! Focus Session: These sessions cover topics of special interest and may be more technical.

The scientific program is organized by members of the program committee, comprising Ramón Díaz-Uriarte, John Fox, Romain François, Robert Gramacy, Paul Hewson, Torsten Hothorn, Kate Mullen, Brian Peterson, Thomas Petzoldt, Anthony Rossini, Barry Rowlingson, Carolin Strobl, Stefan Theussl, Heather Turner, Hadley Wickham and Achim Zeileis.

In addition to the main program of talks and new for useR! 2011, all participants are invited to present a Lightning Talk, for which no abstract is required. These talks provide a 5-minute platform to speak on any R-related topic and should particularly appeal to those new to R. A variation of the pecha kucha[1] and ignite[2] formats will be used in which speakers must provide 15 slides to accompany their talk and each slide will be shown for 20 seconds.

Pre-conference Tutorials

Before the start of the official program, the following half-day tutorials will be offered on Monday, 15 August:

• Douglas Bates: Fitting and evaluating mixed models using lme4
• Roger Bivand and Edzer Pebesma: Handling and analyzing spatio-temporal data in R
• Marine Cadoret and Sébastien Lê: Analysing categorical data in R
• Stephen Eglen: Emacs Speaks Statistics
• Andrea Foulkes: High-dimensional data methods with R
• Frank E Harrell Jr: Regression modeling strategies using the R package rms
• Søren Højsgaard: Graphical models and Bayesian networks with R
• G. Jay Kerns: Introductory probability and statistics using R
• Max Kuhn: Predictive modeling with R and the caret package
• Nicholas Lewin Koh: Everyday R: Statistical consulting with R
• Fausto Molinari, Enrico Branca, Francesco De Filippo and Rocco Claudio Cannizzaro: R-Adamant: Applied financial analysis and risk management
• Martin Morgan: Bioconductor for the analysis of high-throughput genomic data
• Paul Murrell: Introduction to grid graphics
• Giovanni Petris: State space models in R
• Shivani Rao: Topic modeling of large datasets with R using Amazon EMR
• Karline Soetaert and Thomas Petzoldt: Simulating differential equation models in R
• Antony Unwin: Graphical data analysis
• Brandon Whitcher, Jörg Polzehl, Karsten Tabelow: Medical image analysis for MRI

Location & Surroundings

Participants will be well catered for at the University of Warwick, with meals and accommodation available on campus. The university is in the city of Coventry which offers a number of places to visit such as the cathedral and city art gallery. In the heart of England, the city is within easy reach of other attractions such as Warwick's medieval castle and the new Royal Shakespeare and Swan Theatres in Stratford-upon-Avon (Shakespeare's birthplace).

Further Information

A web page offering more information on useR! 2011, including details regarding registration and abstract submission, is available at http://www.R-project.org/useR-2011/

We hope to meet you in Coventry!

The organizing committee:
John Aston, Julia Brettschneider, David Firth, Ashley Ford, Ioannis Kosmidis, Tom Nichols, Elke Thönnes and Heather Turner
useR-2011@R-project.org

[1] http://en.wikipedia.org/wiki/Pecha_Kucha
[2] http://en.wikipedia.org/wiki/Ignite_(event)
Changes in R

From version 2.11.1 patched to version 2.12.1

by the R Core Team

CHANGES IN R VERSION 2.12.1

NEW FEATURES

• The DVI/PDF reference manual now includes the help pages for all the standard packages: splines, stats4 and tcltk were previously omitted (intentionally).

• http://www.rforge.net has been added to the default set of repositories known to setRepositories().

• xz-utils has been updated to version 5.0.0.

• reshape() now makes use of sep when forming names during reshaping to wide format. (PR#14435)

• legend() allows the length of lines to be set by the end user via the new argument seg.len.

• New reference class utility methods copy(), field(), getRefClass() and getClass() have been added.

• When a character value is used for the EXPR argument in switch(), a warning is given if more than one unnamed alternative value is given. This will become an error in R 2.13.0.

• StructTS(type = "BSM") now allows series with just two seasons. (Reported by Birgit Erni.)

INSTALLATION

• The PDF reference manual is now built as PDF version 1.5 with object compression, which on platforms for which this is not the default (notably MiKTeX) halves its size.

• Variable FCLIBS can be set during configuration, for any additional library flags needed when linking a shared object with the Fortran 9x compiler. (Needed with Solaris Studio 12.2.)

BUG FIXES

• seq.int() no longer sometimes evaluates arguments twice.

• The data.frame method of format() failed if a column name was longer than 256 bytes (the maximum length allowed for an R name).

• predict(<lm object>, type = "terms", ...) failed if both terms and interval were specified. (Reported by Bill Dunlap.) Also, if se.fit = TRUE the standard errors were reported for all terms, not just those selected by a non-null terms.

• The TRE regular expressions engine could terminate R rather than give an error when given certain invalid regular expressions. (PR#14398)

• cmdscale(eig = TRUE) was documented to return n − 1 eigenvalues but in fact only returned k. It now returns all n eigenvalues. cmdscale(add = TRUE) failed to centre the return configuration and sometimes lost the labels on the points. Its return value was described wrongly (it is always a list and contains component ac).

• promptClass() in package methods now works for reference classes and gives a suitably specialized skeleton of documentation. Also, callSuper() now works via the methods() invocation as well as for initially specified methods.

• download.file() could leave the destination file open if the URL was not able to be opened. (PR#14414)

• Assignment of an environment to functions or as an attribute to other objects now works for S4 subclasses of "environment".

• Use of '[[<-' for S4 subclasses of "environment" generated an infinite recursion from the method. The method has been replaced by internal code.

• In a reference class S4 method, callSuper() now works in initialize() methods when there is no explicit superclass method.

• '!' dropped attributes such as names and dimensions from a length-zero argument. (PR#14424)

• When list2env() created an environment it was missing a PROTECT call and so was vulnerable to garbage collection.

• Sweave() with keep.source=TRUE dropped comments at the start and end of code chunks. It could also fail when 'SweaveInput' was combined with named chunks.

• The Fortran code used by nls(algorithm = "port") could infinite-loop when compiled
with high optimization on a modern version of gcc, and SAFE_FFLAGS is now used to make this less likely. (PR#14427, seen with 32-bit Windows using gcc 4.5.0 used from R 2.12.0.)

• sapply() with default simplify = TRUE and mapply() with default SIMPLIFY = TRUE wrongly simplified language-like results, as, e.g., in mapply(1:2, c(3,7), FUN = function(i,j) call(':',i,j)).

• Backreferences to undefined patterns in [g]sub(pcre = TRUE) could cause a segfault. (PR#14431)

• The format() (and hence the print()) method for class "Date" rounded fractional dates towards zero: it now always rounds them down.

• Reference S4 class creation could generate ambiguous inheritance patterns under very special circumstances.

• '[[<-' turned S4 subclasses of "environment" into plain environments.

• Long titles for help pages were truncated in package indices and a few other places.

• Additional utilities now work correctly with S4 subclasses of "environment" (rm, locking tools and active bindings).

• spec.ar() now also works for the "ols" method. (Reported by Hans-Ruedi Kuensch.)

• The initialization of objects from S4 subclasses of "environment" now allocates a new environment object.

• R CMD check has more protection against (probably erroneous) example or test output which is invalid in the current locale.

• qr.X() with column names and pivoting now also pivots the column names. (PR#14438)

• unit.pmax() and unit.pmin() in package grid gave incorrect results when all inputs were of length 1. (PR#14443)

• The parser for 'NAMESPACE' files ignored misspelled directives, rather than signalling an error. For 2.12.x a warning will be issued, but this will be correctly reported as an error in later releases. (Reported by Charles Berry.)

• Fix for subsetting of "raster" objects when only one of i or j is specified.

• grid.raster() in package grid did not accept "nativeRaster" objects (like rasterImage() does).

• Rendering raster images in PDF output was resetting the clipping region.

• Rendering of raster images on Cairo X11 device was wrong, particularly when a small image was being scaled up using interpolation. With Cairo < 1.6, will be better than before, though still a little clunky. With Cairo >= 1.6, should be sweet as.

• Several bugs fixed in read.DIF(): single column inputs caused errors, cells marked as "character" could be converted to other types, and (in Windows) copying from the clipboard failed.

CHANGES IN R VERSION 2.12.0

NEW FEATURES

• Reading a package's 'CITATION' file now defaults to ASCII rather than Latin-1: a package with a non-ASCII 'CITATION' file should declare an encoding in its 'DESCRIPTION' file and use that encoding for the 'CITATION' file.

• difftime() now defaults to the "tzone" attribute of "POSIXlt" objects rather than to the current timezone as set by the default for the tz argument. (Wish of PR#14182.)

• pretty() is now generic, with new methods for "Date" and "POSIXt" classes (based on code contributed by Felix Andrews).

• unique() and match() are now faster on character vectors where all elements are in the global CHARSXP cache and have unmarked encoding (ASCII). Thanks to Matthew Dowle for suggesting improvements to the way the hash code is generated in 'unique.c'.

• The enquote() utility, in use internally, is exported now.

• .C() and .Fortran() now map non-zero return values (other than NA_LOGICAL) for logical vectors to TRUE: it has been an implicit assumption that they are treated as true.

• The print() methods for "glm" and "lm" objects now insert linebreaks in long calls in the same way that the print() methods for "summary.[g]lm" objects have long done. This does change the layout of the examples for a number of packages, e.g. MASS. (PR#14250)

• constrOptim() can now be used with method "SANN". (PR#14245) It gains an argument hessian to be passed to optim(), which allows all the ... arguments to be intended for f() and grad(). (PR#14071)
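The switch() change listed under the 2.12.1 new features above can be illustrated with a small sketch (the example values are invented):

```r
## With a character EXPR, a match by name is returned as before:
switch("a", a = "apple", "fallback")   # "apple"

## But supplying more than one unnamed alternative is ambiguous;
## as of R 2.12.1 this signals a warning, and it is slated to
## become an error in R 2.13.0:
switch("c", a = "apple", "first default", "second default")
```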
• curve() now allows expr to be an object of mode "expression" as well as "call" and "function".

• The "POSIX[cl]t" methods for Axis() have been replaced by a single method for "POSIXt". There are no longer separate plot() methods for "POSIX[cl]t" and "Date": the default method has been able to handle those classes for a long time. This inter alia allows a single date-time object to be supplied, the wish of PR#14016. The methods had a different default ("") for xlab.

• Classes "POSIXct", "POSIXlt" and "difftime" have generators .POSIXct(), .POSIXlt() and .difftime(). Package authors are advised to make use of them (they are available from R 2.11.0) to proof against planned future changes to the classes. The ordering of the classes has been changed, so "POSIXt" is now the second class. See the document 'Updating packages for changes in R 2.12.x' on http://developer.r-project.org for the consequences for a handful of CRAN packages.

• The "POSIXct" method of as.Date() allows a timezone to be specified (but still defaults to UTC).

• New list2env() utility function as an inverse of as.list(<environment>) and for fast multi-assign() to existing environment. as.environment() is now generic and uses list2env() as list method.

• There are several small changes to output which 'zap' small numbers, e.g. in printing quantiles of residuals in summaries from "lm" and "glm" fits, and in test statistics in print.anova().

• Special names such as "dim", "names", etc, are now allowed as slot names of S4 classes, with "class" the only remaining exception.

• File '.Renviron' can have architecture-specific versions such as '.Renviron.i386' on systems with sub-architectures.

• installed.packages() has a new argument subarch to filter on sub-architecture.

• The summary() method for packageStatus() now has a separate print() method.

• install.packages() and remove.packages() with lib unspecified and multiple libraries in .libPaths() inform the user of the library location used with a message rather than a warning.

• The startup message now includes the platform and if used, sub-architecture: this is useful where different (sub-)architectures run on the same OS.

• The getGraphicsEvent() mechanism now allows multiple windows to return graphics events, through the new functions setGraphicsEventHandlers(), setGraphicsEventEnv(), and getGraphicsEventEnv(). (Currently implemented in the windows() and X11() devices.)

• tools::texi2dvi() gains an index argument, mainly for use by R CMD Rd2pdf. It avoids the use of texindy by texinfo's texi2dvi >= 1.157, since that does not emulate 'makeindex' well enough to avoid problems with special characters (such as '(', '{', '!') in indices.

• The ability of readLines() and scan() to re-encode inputs to marked UTF-8 strings on Windows since R 2.7.0 is extended to non-UTF-8 locales on other OSes.

• scan() gains a fileEncoding argument to match read.table().

• points() and lines() gain "table" methods to match plot(). (Wish of PR#10472.)

• Sys.chmod() allows argument mode to be a vector, recycled along paths.

• There are |, & and xor() methods for classes "octmode" and "hexmode", which work bitwise.

• Environment variables R_DVIPSCMD, R_LATEXCMD, R_MAKEINDEXCMD, R_PDFLATEXCMD are no longer used nor set in an R session. (With the move to tools::texi2dvi(), the conventional environment variables LATEX, MAKEINDEX and PDFLATEX will be used. options("dvipscmd") defaults to the value of DVIPS, then to "dvips".)

• New function isatty() to see if terminal connections are redirected.

• summaryRprof() returns the sampling interval in component sample.interval and only returns in by.self data for functions with non-zero self times.

• print(x) and str(x) now indicate if an empty list x is named.

• The default summary() method returns an object inheriting from class "summaryDefault" which has a separate print() method that calls zapsmall() for numeric/complex values.
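As a small sketch (not from the NEWS text itself) of the list2env() addition described above, converting a named list to an environment and back:

```r
## Populate a fresh environment from a named list:
e <- list2env(list(a = 1, b = 2), envir = new.env())
sort(ls(e))             # "a" "b"
get("a", envir = e)     # 1

## as.list() on an environment is the inverse,
## up to the (unspecified) ordering of its elements:
str(as.list(e))
```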
84 N EWS AND N OTES • There is limited support for multiple com- • Added a new parseLatex() function (and pressed streams on a ﬁle: all of [bgx]zfile() related functions deparseLatex() and allow streams to be appended to an existing latexToUtf8()) to support conversion of ﬁle, but bzfile() reads only the ﬁrst stream. bibliographic entries for display in R. • Function person() in package utils now • Text rendering of ‘itemize’ in help uses a Uni- uses a given/family scheme in preference to code bullet in UTF-8 and most single-byte Win- ﬁrst/middle/last, is vectorized to handle an ar- dows locales. bitrary number of persons, and gains a role argument to specify person roles using a con- • Added support for polygons with holes to the trolled vocabulary (the MARC relator terms). graphics engine. This is implemented for the pdf(), postscript(), x11(type="cairo"), • Package utils adds a new "bibentry" class for windows(), and quartz() devices (and representing and manipulating bibliographic associated raster formats), but not for information in enhanced BibTeX style, unifying x11(type="Xlib") or xfig() or pictex(). and enhancing the previously existing mecha- The user-level interface is the polypath() nisms. function in graphics and grid.path() in grid. • A bibstyle() function has been added to the • File ‘NEWS’ is now generated at installation tools package with default JSS style for render- with a slightly different format: it will be in ing "bibentry" objects, and a mechanism for UTF-8 on platforms using UTF-8, and other- registering other rendering styles. wise in ASCII. There is also a PDF version, ‘NEWS.pdf’, installed at the top-level of the R • Several aspects of the display of text distribution. help are now customizable using the new Rd2txt_options() function. op- • kmeans(x, 1) now works. Further, kmeans now tions("help_text_width") is no longer used. returns between and total sum of squares. 
• arrayInd() and which() gain an argument useNames. For arrayInd, the default is now FALSE, for speed reasons.

• As is done for closures, the default print method for the formula class now displays the associated environment if it is not the global environment.

• A new facility has been added for inserting code into a package without re-installing it, to facilitate testing changes which can be selectively added and backed out. See ?insertSource.

• New function readRenviron to (re-)read files in the format of '~/.Renviron' and 'Renviron.site'.

• require() will now return FALSE (and not fail) if loading the package or one of its dependencies fails.

• aperm() now allows argument perm to be a character vector when the array has named dimnames (as the results of table() calls do). Similarly, apply() allows MARGIN to be a character vector. (Based on suggestions of Michael Lachmann.)

• Added \href tag to the Rd format, to allow hyperlinks to URLs without displaying the full URL.

• Added \newcommand and \renewcommand tags to the Rd format, to allow user-defined macros.

• New toRd() generic in the tools package to convert objects to fragments of Rd code, and added "fragment" argument to Rd2txt(), Rd2HTML(), and Rd2latex() to support it.

• Directory 'R_HOME/share/texmf' now follows the TDS conventions, so can be set as a texmf tree ('root directory' in MiKTeX parlance).

• S3 generic functions now use correct S4 inheritance when dispatching on an S4 object. See ?Methods, section on "Methods for S3 Generic Functions" for recommendations and details.

• format.pval() gains a ... argument to pass arguments such as nsmall to format(). (Wish of PR#9574.)

• legend() supports title.adj. (Wish of PR#13415.)

• Package utils now exports and documents functions aspell_package_Rd_files() and aspell_package_vignettes() for spell checking package Rd files and vignettes using Aspell, Ispell or Hunspell.

• Added support for subsetting "raster" objects, plus assigning to a subset, conversion to a matrix (of colour strings), and comparisons ('==' and '!=').

The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
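Two of the changes above can be shown in a few lines: aperm() accepting a character perm on a table with named dimnames, and require() returning FALSE instead of failing. The data and the (nonexistent) package name here are only for illustration.

```r
# A small named two-way table; perm given by dimnames names
tab <- table(g = c("a", "a", "b"), h = c("x", "y", "y"))
flipped <- aperm(tab, c("h", "g"))
names(dimnames(flipped))  # "h" "g"

# require() on a missing package: FALSE, not an error
ok <- suppressWarnings(require("notARealPackage12345", quietly = TRUE))
ok  # FALSE
```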
• Package news can now be given in Rd format, and news() prefers these 'inst/NEWS.Rd' files to old-style plain text 'NEWS' or 'inst/NEWS' files.

• New simple function packageVersion().

• The PCRE library has been updated to version 8.10.

• The standard Unix-alike terminal interface declares its name to readline as 'R', so that can be used for conditional sections in '~/.inputrc' files.

• 'Writing R Extensions' now stresses that the standard sections in '.Rd' files (other than \alias, \keyword and \note) are intended to be unique, and the conversion tools now drop duplicates with a warning. The '.Rd' conversion tools also warn about an unrecognized type in a \docType section.

• ecdf() objects now have a quantile() method.

• format() methods for date-time objects now attempt to make use of a "tzone" attribute with "%Z" and "%z" formats, but it is not always possible. (Wish of PR#14358.)

• R now provides 'jss.cls' and 'jss.bst' (the class and bib style file for the Journal of Statistical Software) as well as 'RJournal.bib' and 'Rnews.bib', and R CMD ensures that the '.bst' and '.bib' files are found by BibTeX.

• Functions using the TAR environment variable no longer quote the value when making system calls. This allows values such as 'tar --force-local', but does require additional quotes in, e.g., TAR = "'/path with spaces/mytar'".

• New function droplevels() to remove unused factor levels.

• system(command, intern = TRUE) now gives an error on a Unix-alike (as well as on Windows) if command cannot be run. It reports a non-success exit status from running command as a warning.
On a Unix-alike an attempt is made to return the actual exit status of the command in system(intern = FALSE): previously this had been system-dependent but on POSIX-compliant systems the value returned was 256 times the status.

• system() has a new argument ignore.stdout which can be used to (portably) ignore standard output.

• system(intern = TRUE) and pipe() connections are guaranteed to be available on all builds of R.

• Sys.which() has been altered to return "" if the command is not found (even on Solaris).

• A facility for defining reference-based S4 classes (in the OOP style of Java, C++, etc.) has been added experimentally to package methods; see ?ReferenceClasses.

• The predict method for "loess" fits gains an na.action argument which defaults to na.pass rather than the previous default of na.omit. Predictions from "loess" fits are now named from the row names of newdata.

• Parsing errors detected during Sweave() processing will now be reported referencing their original location in the source file.

• New adjustcolor() utility, e.g., for simple translucent color schemes.

• qr() now has a trivial lm method with a simple (fast) validity check.

• An experimental new programming model has been added to package methods for reference (OOP-style) classes and methods. See ?ReferenceClasses.

• bzip2 has been updated to version 1.0.6 (bug-fix release). '--with-system-bzlib' now requires at least version 1.0.6.

• tools::texi2dvi(file, clean = TRUE) now works in more cases (e.g. where emulation is used and when 'file' is not in the current directory).

DEPRECATED & DEFUNCT

• Supplying the parser with a character string containing both octal/hex and Unicode escapes is now an error.

• File extension '.C' for C++ code files in packages is now defunct.

• R CMD check no longer supports configuration files containing Perl configuration variables: use the environment variables documented in 'R Internals' instead.

• The save argument of require() now defaults to FALSE and save = TRUE is now deprecated. (This facility is very rarely actually used, and was superseded by the 'Depends' field of the 'DESCRIPTION' file long ago.)

• R CMD check --no-latex is deprecated in favour of '--no-manual'.

• R CMD Sd2Rd is formally deprecated and will be removed in R 2.13.0.
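Three of the small new utilities above, sketched in one runnable snippet (data invented for illustration):

```r
# droplevels(): remove unused factor levels
f <- factor(c("a", "b"), levels = c("a", "b", "c"))
levels(droplevels(f))  # "a" "b" -- unused "c" is gone

# ecdf objects now have a quantile() method
q50 <- quantile(ecdf(c(1, 2, 3, 4)), 0.5)

# adjustcolor(): a translucent variant of a colour
adjustcolor("red", alpha.f = 0.5)
```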
PACKAGE INSTALLATION

• install.packages() has a new argument libs_only to optionally pass '--libs-only' to R CMD INSTALL and works analogously for Windows binary installs (to add support for 64- or 32-bit Windows).

• When sub-architectures are in use, the installed architectures are recorded in the Archs field of the 'DESCRIPTION' file. There is a new default filter, "subarch", in available.packages() to make use of this.
Code is compiled in a copy of the 'src' directory when a package is installed for more than one sub-architecture: this avoids problems with cleaning the sources between building sub-architectures.

• R CMD INSTALL --libs-only no longer overrides the setting of locking, so a previous version of the package will be restored unless '--no-lock' is specified.

UTILITIES

• R CMD Rprof|build|check are now based on R rather than Perl scripts. The only remaining Perl scripts are the deprecated R CMD Sd2Rd and install-info.pl (used only if install-info is not found) as well as some maintainer-mode-only scripts.
NB: because these have been completely rewritten, users should not expect undocumented details of previous implementations to have been duplicated.
R CMD no longer manipulates the environment variables PERL5LIB and PERLLIB.

• R CMD check has a new argument '--extra-arch' to confine tests to those needed to check an additional sub-architecture.
Its check for "Subdirectory 'inst' contains no files" is more thorough: it looks for files, and it warns if there are only empty directories.
Environment variables such as R_LIBS and those used for customization can be set for the duration of checking via a file '~/.R/check.Renviron' (in the format used by '.Renviron', and with sub-architecture specific versions such as '~/.R/check.Renviron.i386' taking precedence).
There are new options '--multiarch' to check the package under all of the installed sub-architectures and '--no-multiarch' to confine checking to the sub-architecture under which check is invoked. If neither option is supplied, a test is done of installed sub-architectures and all those which can be run on the current OS are used.
Unless multiple sub-architectures are selected, the install done by check for testing purposes is only of the current sub-architecture (via R CMD INSTALL --no-multiarch).
It will skip the check for non-ascii characters in code or data if the environment variables _R_CHECK_ASCII_CODE_ or _R_CHECK_ASCII_DATA_ are respectively set to FALSE. (Suggestion of Vince Carey.)

• R CMD build no longer creates an 'INDEX' file (R CMD INSTALL does so), and '--force' removes (rather than overwrites) an existing 'INDEX' file.
It supports a file '~/.R/build.Renviron' analogously to check.
It now runs build-time \Sexpr expressions in help files.

• R CMD Rd2dvi makes use of tools::texi2dvi() to process the package manual. It is now implemented entirely in R (rather than partially as a shell script).

• R CMD Rprof now uses utils::summaryRprof() rather than Perl. It has new arguments to select one of the tables and to limit the number of entries printed.

• R CMD Sweave now runs R with '--vanilla' so the environment setting of R_LIBS will always be used.

C-LEVEL FACILITIES

• lang5() and lang6() (in addition to pre-existing lang[1-4]()) convenience functions for easier construction of eval() calls. If you have your own definition, do wrap it inside #ifndef lang5 .... #endif to keep it working with old and new R.

• Header 'R.h' now includes only the C headers it itself needs, hence no longer includes errno.h. (This helps avoid problems when it is included from C++ source files.)

• Headers 'Rinternals.h' and 'R_ext/Print.h' include the C++ versions of 'stdio.h' and 'stdarg.h' respectively if included from a C++ source file.

INSTALLATION

• A C99 compiler is now required, and more C99 language features will be used in the R sources.

• Tcl/Tk >= 8.4 is now required (increased from 8.3).
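The check customization described above works through ordinary environment variables, and a file in '.Renviron' format can be loaded with the new readRenviron(). A minimal sketch (the variable value is set only for illustration):

```r
# Write a one-line file in Renviron format and load it
tf <- tempfile()
writeLines("_R_CHECK_ASCII_CODE_=FALSE", tf)
readRenviron(tf)
Sys.getenv("_R_CHECK_ASCII_CODE_")  # "FALSE"
```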
• System functions access, chdir and getcwd are now essential to configure R. (In practice they have been required for some time.)

• make check compares the output of the examples from several of the base packages to reference output rather than the previous output (if any). Expect some differences due to differences in floating-point computations between platforms.

• File 'NEWS' is no longer in the sources, but generated as part of the installation. The primary source for changes is now 'doc/NEWS.Rd'.

• The popen system call is now required to build R. This ensures the availability of system(intern = TRUE), pipe() connections and printing from postscript().

• The pkg-config file 'libR.pc' now also works when R is installed using a sub-architecture.

• R has always required a BLAS that conforms to IEC 60559 arithmetic, but after discovery of more real-world problems caused by a BLAS that did not, this is tested more thoroughly in this version.

BUG FIXES

• Calls to selectMethod() by default no longer cache inherited methods. This could previously corrupt methods used by as().

• The densities of non-central chi-squared are now more accurate in some cases in the extreme tails, e.g. dchisq(2000, 2, 1000), as a series expansion was truncated too early. (PR#14105)

• pt() is more accurate in the left tail for ncp large, e.g. pt(-1000, 3, 200). (PR#14069)

• The default C function (R_binary) for binary ops now sets the S4 bit in the result if either argument is an S4 object. (PR#13209)

• source(echo=TRUE) failed to echo comments that followed the last statement in a file.

• S4 classes that contained one of "matrix", "array" or "ts" and also another class now accept superclass objects in new(). Also fixes failure to call validObject() for these classes.

• Conditional inheritance defined by argument test in methods::setIs() will no longer be used in S4 method selection (caching these methods could give incorrect results). See ?setIs.

• The signature of an implicit generic is now used by setGeneric() when that does not use a definition nor explicitly set a signature.

• A bug in callNextMethod() for some examples with "..." in the arguments has been fixed. See file 'src/library/methods/tests/nextWithDots.R' in the sources.

• match(x, table) (and hence %in%) now treat "POSIXlt" consistently with, e.g., "POSIXct".

• Built-in code dealing with environments (get(), assign(), parent.env(), is.environment() and others) now behaves consistently to recognize S4 subclasses; is.name() also recognizes subclasses.

• The abs.tol control parameter to nlminb() now defaults to 0.0 to avoid false declarations of convergence in objective functions that may go negative.

• The standard Unix-alike termination dialog to ask whether to save the workspace takes an EOF response as n to avoid problems with a damaged terminal connection. (PR#14332)

• Added warn.unused argument to hist.default() to allow suppression of spurious warnings about graphical parameters used with plot=FALSE. (PR#14341)

• predict.lm(), summary.lm(), and indeed lm() itself had issues with residual DF in zero-weighted cases (the latter two only in connection with empty models). (Thanks to Bill Dunlap for spotting the predict() case.)

• aperm() treated resize = NA as resize = TRUE.

• constrOptim() now has an improved convergence criterion, notably for cases where the minimum was (very close to) zero; further, other tweaks inspired from code proposals by Ravi Varadhan.

• Rendering of S3 and S4 methods in man pages has been corrected and made consistent across output formats.

• Simple markup is now allowed in \title sections in '.Rd' files.

• The behaviour of as.logical() on factors (to use the levels) was lost in R 2.6.0 and has been restored.

• prompt() did not backquote some default arguments in the \usage section. (Reported by Claudia Beleites.)

• writeBin() disallows attempts to write 2GB or more in a single call. (PR#14362)
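Two of the fixes above can be checked directly in a current R session: as.logical() on a factor uses the levels again, and %in% treats "POSIXlt" like "POSIXct". The values are invented for illustration.

```r
# as.logical() on a factor goes through the levels
f <- factor(c("TRUE", "FALSE", "TRUE"))
as.logical(f)  # TRUE FALSE TRUE

# %in% on a POSIXlt value matches against POSIXct values
now <- Sys.time()
as.POSIXlt(now) %in% c(now, now + 1)  # TRUE
```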
• new() and getClass() will now work if Class is a subclass of "classRepresentation" and should also be faster in typical calls.

• The summary() method for data frames makes a better job of names containing characters invalid in the current locale.

• [[ sub-assignment for factors could create an invalid factor (reported by Bill Dunlap).

• Negate(f) would not evaluate argument f until first use of returned function (reported by Olaf Mersmann).

• quietly=FALSE is now also an optional argument of library(), and consequently, quietly is now propagated also for loading dependent packages, e.g., in require(*, quietly=TRUE).

• If the loop variable in a for loop was deleted, it would be recreated as a global variable. (Reported by Radford Neal; the fix includes his optimizations as well.)

• Task callbacks could report the wrong expression when the task involved parsing new code. (PR#14368)

• getNamespaceVersion() failed; this was an accidental change in 2.11.0. (PR#14374)

• identical() returned FALSE for external pointer objects even when the pointer addresses were the same.

• L$a@x[] <- val did not duplicate in a case it should have.

• tempfile() now always gives a random file name (even if the directory is specified) when called directly after startup and before the R RNG had been used. (PR#14381)

• quantile(type=6) behaved inconsistently. (PR#14383)

• Subassignment by [, [[ or $ on an expression object with value NULL coerced the object to a list.

• model.frame(drop.unused.levels = TRUE) did not take into account NA values of factors when deciding to drop levels. (PR#14393)

• library.dynam.unload required an absolute path for libpath. (PR#14385)
Both library() and loadNamespace() now record absolute paths for use by searchpaths() and getNamespaceInfo(ns, "path").

• The self-starting model NLSstClosestX failed if some deviation was exactly zero. (PR#14384)

• X11(type = "cairo") (and other devices such as png using cairographics) which use Pango font selection now work around a bug in Pango when very small fonts (those with sizes between 0 and 1 in Pango's internal units) are requested. (PR#14369)

• Added workaround for the font problem with X11(type = "cairo") and similar on Mac OS X whereby italic and bold styles were interchanged. (PR#13463 amongst many other reports.)

• source(chdir = TRUE) failed to reset the working directory if it could not be determined – that is now an error.

• Fix for crash of example(rasterImage) on x11(type="Xlib").

• Force Quartz to bring the on-screen display up-to-date immediately before the snapshot is taken by grid.cap() in the Cocoa implementation. (PR#14260)

• model.frame had an unstated 500 byte limit on variable names. (Example reported by Terry Therneau.) The 256-byte limit on names is now documented.

• backSpline(.) behaved incorrectly when the knot sequence was decreasing. (PR#14386)

• The reference BLAS included in R was assuming that 0*x and x*0 were always zero (whereas they could be NA or NaN in IEC 60559 arithmetic). This was seen in results from tcrossprod, and for example that log(0) %*% 0 gave 0.

• The calculation of whether text was completely outside the device region (in which case, you draw nothing) was wrong for screen devices (which have [0, 0] at top-left). The symptom was (long) text disappearing when resizing a screen window (to make it smaller). (PR#14391)

CHANGES IN R VERSION 2.11.1 patched

NEW FEATURES

• install.packages() has a new optional argument INSTALL_opts which can be used to pass options to R CMD INSTALL for source-package installs.

• R CMD check now runs the package-specific tests with LANGUAGE=en to facilitate comparison to '.Rout.save' files.
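The Negate() fix mentioned above means the function argument is now evaluated when Negate() is called, so problems with it surface immediately rather than at first use of the returned function. Basic use, with invented helpers:

```r
is_odd  <- function(x) x %% 2 == 1
is_even <- Negate(is_odd)   # is_odd is forced here, not at first call
is_even(4)  # TRUE
is_even(3)  # FALSE
```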
• sessionInfo() gives more detailed platform information, including 32/64-bit and the sub-architecture if one is used.

DEPRECATED & DEFUNCT

• The use of Perl configuration variables for R CMD check (as previously documented in 'Writing R Extensions') is deprecated and will be removed in R 2.12.0. Use the environment variables documented in 'R Internals' instead.

BUG FIXES

• R CMD Rd2dvi failed if run from a path containing space(s). This also affected R CMD check, which calls Rd2dvi.

• stripchart() could fail with an empty factor level. (PR#14317)

• \Sexpr[result=rd] in an Rd file added a spurious newline, which was displayed as extra whitespace when rendered.

• require(save = TRUE) recorded the names of packages it failed to load.

• packageStatus() could return a data frame with duplicate row names which could then not be printed.

• txtProgressBar(style = 2) did not work correctly.
txtProgressBar(style = 3) did not display until a non-minimum value was set.

• R CMD had a typo in its detection of whether the environment variable TEXINPUTS was set (reported by Martin Morgan).

• The command-line parser could mistake '--file=size...' for one of the options for setting limits for Ncells or Vcells.

• The internal strptime() could corrupt its copy of the timezone which would then lead to spurious warnings. (PR#14338)

• dir.create(recursive = TRUE) could fail if one of the components existed but was a directory on a read-only file system. (Seen on Solaris, where the error code returned is not even listed as possible on the man page.)

• The postscript() and pdf() devices will now allow lwd values less than 1 (they used to force such values to be 1).

• Fixed font face for CID fonts in pdf() graphics output. (PR#14326)

• GERaster() now checks for width or height of zero and does nothing in those cases; previously the behaviour was undefined, probably device-specific, and possibly dangerous.

• wilcox.test(x, y, conf.int = TRUE) failed with an unhelpful message if x and y were constant vectors, and similarly in the one-sample case. (PR#14329)

• Improperly calling Recall() from outside a function could cause a segfault. (Reported by Robert McGehee.)

• Text help rendering of \tabular{} has been improved: under some circumstances leading blank columns were not rendered.

• strsplit(x, fixed=TRUE) marked UTF-8 strings with the local encoding when no splits were found.

• weighted.mean(NA, na.rm=TRUE) and similar now returns NaN again, as it did prior to R 2.10.0.

• contour() did not display dashed line types properly when contour lines were labelled. (Reported by David B. Thompson.)

• tools::undoc() again detects undocumented data objects. Of course, this also affects R CMD check.

• ksmooth(x,NULL) no longer segfaults.

• approxfun(), approx(), splinefun() and spline() could be confused by x values that were different but so close as to print identically. (PR#14377)
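The restored weighted.mean() behaviour and the strsplit() no-split case above can be verified directly (inputs invented for illustration):

```r
# NaN again, as prior to R 2.10.0
weighted.mean(NA, na.rm = TRUE)  # NaN

# fixed = TRUE with no split found returns the input unchanged
strsplit("abc", "z", fixed = TRUE)[[1]]  # "abc"
```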
Changes on CRAN

2010-06-04 to 2010-12-12

by Kurt Hornik and Achim Zeileis

New CRAN task views

OfficialStatistics Topic: Official Statistics & Survey Methodology. Maintainer: Matthias Templ. Packages: Amelia, EVER, Hmisc, MImix, RecordLinkage, SDaA, SeqKnn, StatMatch, TeachingSampling, VIM, cat, impute, ineq, laeken, lme4, memisc, mi, micEcon, mice, missMDA, mitools, mix, nlme, norm, odfWeave.survey, pan, pps, reweight, robCompositions, rrcovNA, sampfling, sampling, samplingbook, sdcMicro, sdcTable, simFrame, simPopulation, spsurvey, stratification, survey*, surveyNG, urn, x12, yaImpute.

ReproducibleResearch Topic: Reproducible Research. Maintainer: Max Kuhn. Packages: Design*, Hmisc*, R.cache, R.rsp, R2HTML*, R2PPT, R2wd, SRPM, SweaveListingUtils, TeachingSampling, animation, apsrtable, ascii, brew, cacheSweave, cxxPack, exams, highlight, hwriter, memisc, odfWeave, odfWeave.survey, pgfSweave, quantreg, r2lh, reporttools, rms*, svSweave, tikzDevice, xtable*.

(* = core package)

New packages in CRAN task views

Bayesian Ratings, RxCEcolInf, SimpleTable, spikeslab.

ChemPhys OrgMassSpecR, plsRglm*, solaR.

ClinicalTrials DoseFinding, MCPMod.

Distributions mc2d, nacopula, truncnorm.

Environmetrics DSpat, SPACECAP, secr.

ExperimentalDesign DoseFinding.

Finance orderbook, rrv, schwartz97.

HighPerformanceComputing gcbd, magma, rpud.

MachineLearning Boruta, LiblineaR, LogicForest, LogicReg, RSNNS, SDDA, hda, quantregForest, rda, rgp, sda.

MedicalImaging RNiftyReg*, dpmixsim*, mritc*.

Optimization Rcgmin, Rsolnp*, Rvmmin, minqa*, optimx*.

Psychometrics QuantPsyc, REQS, catR, equate, plsRglm.

Spatial CompRandFld, MarkedPointProcess, constrainedKriging, divagis, geosphere, raster*, sparr, sphet, vardiag.

Survival aster, aster2, censReg, exactRankTests, glrt, saws, simexaft, spef, splinesurv, survPresmooth.

TimeSeries MARSS, TSAgg, mondate, pomp.

gR catnet.

(* = core package)

New contributed packages

ACCLMA ACC & LMA Graph Plotting. Authors: Tal Carmi, Liat Gaziel.

AMA Anderson-Moore Algorithm. Authors: Gary Anderson, Aneesh Raghunandan.

AllPossibleSpellings Computes all of a word's possible spellings. Author: Antoine Tremblay, IWK Health Center.

BMS Bayesian Model Averaging Library. Authors: Martin Feldkircher and Stefan Zeugner. In view: Bayesian.

BayHap Bayesian analysis of haplotype association using Markov Chain Monte Carlo. Authors: Raquel Iniesta and Victor Moreno.

Benchmarking Benchmark and frontier analysis using DEA and SFA (BeDS). Authors: Peter Bogetoft and Lars Otto.

BlakerCI Blaker's binomial confidence limits. Author: J. Klaschka.

CFL Compensatory Fuzzy Logic. Authors: Pablo Michel Marin Ortega, Kornelius Rohmeyer.

CNVassoc Association analysis of CNV data. Authors: Juan R González, Isaac Subirana.

COSINE COndition SpecIfic sub-NEtwork. Author: Haisu Ma.

COUNT Functions, data and code for count data. Author: Joseph M Hilbe.

Ckmeans.1d.dp Optimal distance-based clustering for one-dimensional data. Authors: Joe Song and Haizhou Wang.

ClustOfVar Clustering of variables. Authors: Marie Chavent and Vanessa Kuentz and Benoit Liquet and Jerome Saracco.
CompQuadForm Distribution function of quadratic forms in normal variables. Author: P. Lafaye de Micheaux.

CompRandFld Composite-likelihood based Analysis of Random Fields. Authors: Simone Padoan, Moreno Bevilacqua. In view: Spatial.

DCGL Differential Coexpression Analysis of Gene Expression Microarray Data. Authors: Bao-Hong Liu, Hui Yu.

DECIDE DEComposition of Indirect and Direct Effects. Author: Christiana Kartsonaki.

DMwR Functions and data for "Data Mining with R". Author: Luis Torgo.

DNAtools Tools for analysing forensic genetic DNA data. Authors: Torben Tvedebrink and James Curran.

DeducerExtras Additional dialogs and functions for Deducer. Author: Ian Fellows.

DeducerPlugInScaling Reliability and factor analysis plugin. Authors: Alberto Mirisola and Ian Fellows.

DiceView Plot methods for computer experiments design and models. Authors: Yann Richet, Yves Deville, Clement Chevalier.

EMA Easy Microarray data Analysis. Author: EMA group.

EquiNorm Normalize expression data using equivalently expressed genes. Authors: Li-Xuan Qin, Jaya M. Satagopan.

FAwR Functions and Datasets for "Forest Analytics with R". Authors: Andrew Robinson and Jeff Hamann.

FeaLect Scores features for Feature seLection. Author: Habil Zare.

FitARMA Fit ARMA or ARIMA using fast MLE algorithm. Author: A.I. McLeod.

FourierDescriptors Generate images using Fourier descriptors. Author: John Myles White.

GAD General ANOVA Design (GAD): Analysis of variance from general principles. Author: Leonardo Sandrini-Neto & Mauricio G. Camargo.

GPseq Using the generalized Poisson distribution to model sequence read counts from high throughput sequencing experiments. Authors: Sudeep Srivastava, Liang Chen.

GWASExactHW Exact Hardy-Weinburg testing for Genome Wide Association Studies. Author: Ian Painter, GENEVA project, University of Washington.

HDclassif High Dimensionnal Classification. Authors: R. Aidan, L. Berge, C. Bouveyron, S. Girard.

HWEintrinsic Objective Bayesian Testing for the Hardy-Weinberg Equilibrium Problem. Author: Sergio Venturini.

HapEstXXR Haplotype-based analysis of association for genetic traits. Authors: Sven Knueppel and Klaus Rohde.

HumMeth27QCReport Quality control and preprocessing of Illumina's Infinium HumanMethylation27 BeadChip methylation assay. Author: F.M. Mancuso.

IMIS Incremental Mixture Importance Sampling. Authors: Adrian Raftery, Le Bao.

ImpactIV Identifying Causal Effect for Multi-Component Intervention Using Instrumental Variable Method. Author: Peng Ding.

IniStatR Initiation à la Statistique avec R. Authors: Frederic Bertrand, Myriam Maumy-Bertrand.

LMERConvenienceFunctions An assortment of functions to facilitate modeling with linear mixed-effects regression (LMER). Author: Antoine Tremblay.

LPCM Local principal curve methods. Authors: Jochen Einbeck and Ludger Evers.

LSD Lots of Superior Depictions. Authors: Bjoern Schwalb, Achim Tresch, Romain Francois.

LVQTools Learning Vector Quantization Tools. Author: Sander Kelders.

LogicForest Logic Forest. Author: Bethany Wolf. In view: MachineLearning.

MARSS Multivariate Autoregressive State-Space Modeling. Authors: Eli Holmes, Eric Ward, and Kellie Wills, NOAA, Seattle, USA. In view: TimeSeries.

MCLIME Simultaneous Estimation of the Regression Coefficients and Precision Matrix. Authors: T. Tony Cai, Hongzhe Li, Weidong Liu and Jichun Xie.

MMST Datasets from "Modern Multivariate Statistical Techniques" by Alan Julian Izenman. Author: Keith Halbert.

MOCCA Multi-objective optimization for collecting cluster alternatives. Author: Johann Kraus.
MSToolkit The MSToolkit library for clinical trial design. Authors: Mango Solutions & Pfizer.

MatrixModels Modelling with Sparse And Dense Matrices. Authors: Douglas Bates and Martin Maechler.

MeDiChI MeDiChI ChIP-chip deconvolution library. Author: David J Reiss.

MetabolAnalyze Probabilistic latent variable models for metabolomic data. Authors: Nyamundanda Gift, Isobel Claire Gormley and Lorraine Brennan.

Modalclust Hierarchical Modal Clustering. Authors: Surajit Ray and Yansong Cheng.

MsatAllele Visualizes the scoring and binning of microsatellite fragment sizes. Author: Filipe Alberto.

NCBI2R Navigate and annotate genes and SNPs. Author: Scott Melville.

NetCluster Clustering for networks. Authors: Mike Nowak, Solomon Messing, Sean J Westwood, and Dan McFarland.

NetData Network Data for McFarland's SNA R labs. Authors: Mike Nowak, Sean J Westwood, Solomon Messing, and Dan McFarland.

NetworkAnalysis Statistical inference on populations of weighted or unweighted networks. Author: Cedric E Ginestet.

ORDER2PARENT Estimate parent distributions with data of several order statistics. Author: Cheng Chou.

OSACC Ordered Subset Analysis for Case-Control Studies. Authors: Xuejun Qin, Elizabeth R. Hauser, Silke Schmidt (Center for Human Genetics, Duke University).

OrgMassSpecR Organic Mass Spectrometry. Authors: Nathan G. Dodder, Southern California Coastal Water Research Project (SCCWRP). Contributions from Katharine M. Mullen. In view: ChemPhys.

PERregress Regression Functions and Datasets. Author: Peter Rossi.

PKreport A reporting pipeline for checking population pharmacokinetic model assumption. Author: Xiaoyong Sun.

PermAlgo Permutational algorithm to simulate survival data. Authors: Marie-Pierre Sylvestre, Thad Edens, Todd MacKenzie, Michal Abrahamowicz.

PhysicalActivity Process Physical Activity Accelerometer Data. Authors: Leena Choi, Zhouwen Liu, Charles E. Matthews, and Maciej S. Buchowski.

PoMoS Polynomial (ordinary differential equation) Model Search. Authors: Mangiarotti S., Coudret R., Drapeau L.

ProjectTemplate Automates the creation of new statistical analysis projects. Author: John Myles White.

QSARdata Quantitative Structure Activity Relationship (QSAR) Data Sets. Author: Max Kuhn.

QuACN Quantitative Analysis of Complex Networks. Author: Laurin Mueller.

R4dfp 4dfp MRI Image Read and Write Routines. Author: Kevin P. Barry with contributions from Avi Z. Snyder.

RBrownie Continuous and discrete ancestral character reconstruction and evolutionary rate tests. Authors: J. Conrad Stack, Brian O'Meara, Luke Harmon.

RGCCA Regularized Generalized Canonical Correlation Analysis. Author: Arthur Tenenhaus.

RGtk2Extras Data frame editor and dialog making wrapper for RGtk2. Authors: Tom Taverner, with contributions from John Verzani and Iago Conde.

RJSONIO Serialize R objects to JSON, JavaScript Object Notation. Author: Duncan Temple Lang.

RMC Functions for fitting Markov models. Author: Scott D. Foster.

RNCBI The java ncbi interface to R. Author: Martin Schumann.

RNCBIAxis2Libs Axis2 libraries for use in the R environment. Author: Martin Schumann.

RNCBIEUtilsLibs EUtils libraries for use in the R environment. Author: Martin Schumann.

RNiftyReg Medical image registration using the NiftyReg library. Authors: Jon Clayden; based on original code by Marc Modat and Pankaj Daga. In view: MedicalImaging.

RSNNS Neural Networks in R using the Stuttgart Neural Network Simulator (SNNS). Author: Christoph Bergmeir. In view: MachineLearning.

RSearchYJ Search with Yahoo Japan. Author: Yohei Sato.

RWebMA An R interface to WebMA of Yahoo Japan. Author: Yohei Sato.
RWekajars R/Weka interface jars. Author: Kurt Hornik.

RandForestGUI Authors: Rory Michelland, Genevieve Grundmann.

RcmdrPlugin.EHESsampling Tools for sampling in European Health Examination Surveys (EHES). Author: Statistics Norway.

RcmdrPlugin.SensoMineR Graphical User Interface for SensoMineR. Authors: F. Husson, J. Josse, S. Le.

RcmdrPlugin.TextMining Rcommander plugin for tm package. Author: Dzemil Lusija.

RcppGSL Rcpp integration for GNU GSL vectors and matrices. Authors: Romain Francois and Dirk Eddelbuettel.

Rd2roxygen Convert Rd to roxygen documentation. Authors: Hadley Wickham and Yihui Xie.

Records Record Values and Record Times. Author: Magdalena Chrapek.

Renext Renewal method for extreme values extrapolation. Authors: Yves Deville, IRSN.

Rfit Rank Estimation for Linear Models. Author: John Kloke.

Rramas Matrix population models. Author: Marcelino de la Cruz.

Runiversal Convert R objects to Java variables and XML. Author: Mehmet Hakan Satman.

RunuranGUI A GUI for the UNU.RAN random variate generators. Author: Josef Leydold.

SAPP Statistical Analysis of Point Processes. Author: The Institute of Statistical Mathematics.

SE.IGE Standard errors of estimated genetic parameters. Author: Piter Bijma.

SII Calculate ANSI S3.5-1997 Speech Intelligibility Index. Author: Gregory R. Warnes.

SMCP Smoothed minimax concave penalization (SMCP) method for genome-wide association studies. Author: Jin (Gordon) Liu.

SPOT Sequential Parameter Optimization. Authors: T. Bartz-Beielstein with contributions from J. Ziegenhirt, W. Konen, O. Flasch, P. Koch, M. Zaefferer.

SQUAREM Squared extrapolation methods for accelerating fixed-point iterations. Author: Ravi Varadhan.

SSSR Server Side Scripting with R. Author: Mehmet Hakan Satman.

SamplerCompare A framework for comparing the performance of MCMC samplers. Author: Madeleine Thompson.

SlimPLS SlimPLS multivariate feature selection. Author: Michael Gutkin.

SortableHTMLTables Turns a data frame into an HTML file containing a sortable table. Author: John Myles White.

TEQR Target Equivalence Range Design. Author: M. Suzette Blanchard.

TRIANGG General discrete triangular distribution. Authors: Tristan Senga Kiessé, Francial G. Libengué, Silvio S. Zocchi, Célestin C. Kokonendji.

TSAgg Time series Aggregation. Author: JS Lessels. In view: TimeSeries.

ThreeGroups ML Estimator for Baseline-Placebo-Treatment (Three-group) designs. Author: Holger L. Kern.

TilePlot This package performs various analyses of functional gene tiling DNA microarrays for studying complex microbial communities. Author: Ian Marshall.

TreePar Estimating diversification rate changes in phylogenies. Author: Tanja Stadler.

TunePareto Multi-objective parameter tuning for classifiers. Authors: Christoph Muessel, Hans Kestler.

VecStatGraphs2D Vector analysis using graphical and analytical methods in 2D. Authors: Juan Carlos Ruiz Cuetos, Maria Eugenia Polo Garcia, Pablo Garcia Rodriguez.

VecStatGraphs3D Vector analysis using graphical and analytical methods in 3D. Authors: Juan Carlos Ruiz Cuetos, Maria Eugenia Polo Garcia, Pablo Garcia Rodriguez.

WDI Search and download data from the World Bank's World Development Indicators. Author: Vincent Arel-Bundock.

WGCNA Weighted Gene Co-Expression Network Analysis. Authors: Peter Langfelder and Steve Horvath with contributions by Jun Dong, Jeremy Miller, Lin Song, Andy Yip, and Bin Zhang.

WMTregions Exact calculation of WMTR. Author: Pavel Bazovkin.

abc Functions to perform Approximate Bayesian Computation (ABC) using simulated data. Authors: Katalin Csillery, with contributions from Michael Blum and Olivier Francois.
94.
94 N EWS AND N OTESadaptivetau Tau-leaping stochastic simulation. Au- cMonkey cMonkey intgrated biclustering algo- thor: Philip Johnson. rithm. Authors: David J Reiss, Institute for Systems Biology.alabama Constrained nonlinear optimization. Au- thor: Ravi Varadhan (with contributions from care CAR Estimation, Variable Selection, and Re- Gabor Grothendieck). gression. Authors: Verena Zuber and Kor- binian Strimmer.allan Automated Large Linear Analysis Node. Au- thor: Alan Lee. caspar Clustered and sparse regression (CaSpaR). Author: Daniel Percival.andrews Andrews curves. Author: Jaroslav Mys- livec. catR Procedures to generate IRT adaptive tests (CAT). Authors: David Magis (U Liege, Bel-aratio Assessment ratio analysis. Author: Daniel gium), Gilles Raiche (UQAM, Canada). In McMillen. view: Psychometrics.ares Environment air pollution epidemiology: a li- cccd Class Cover Catch Digraphs. Author: David J. brary for time series analysis. Authors: Wash- Marchette. ington Junger and Antonio Ponce de Leon. censReg Censored Regression (Tobit) Models. Au-aster2 Aster Models. Author: Charles J. Geyer. In thor: Arne Henningsen. In view: Survival. view: Survival. cgdsr R-Based API for accessing the MSKCC CancerbayesDem Graphical User Interface for bayesTFR. Genomics Data Server (CGDS). Author: An- Author: Hana Sevcikova. ders Jacobsen.bayesTFR Bayesian Fertility Projection. Authors: charlson Converts listwise icd9 data into comorbid- Hana Sevcikova, Leontine Alkema, Adrian ity count and Charlson Index. Author: Vanessa Raftery. Cox.bbemkr Bayesian bandwidth estimation for multi- cherry Multiple testing methods for exploratory re- variate kernel regression with Gaussian error. search. Author: Jelle Goeman. Authors: Han Lin Shang and Xibin Zhang. clustsig Signiﬁcant Cluster Analysis. Authors: Dou-beadarrayMSV Analysis of Illumina BeadArray glas Whitaker and Mary Christman. SNP data including MSV markers. 
Author: colcor Tests for column correlation in the presence Lars Gidskehaug. of row correlation. Author: Omkar Muralidha-beeswarm The bee swarm plot, an alternative to ran. stripchart. Author: Aron Charles Eklund. compareGroups Descriptives by groups. Authors:belief Contains basic functions to manipulate be- Héctor Sanz, Isaac Subirana, Joan Vila. lief functions and associated mass assignments. constrainedKriging Constrained, covariance- Authors: N. Maillet, B. Charnomordic, S. matching constrained and universal point or Destercke. block kriging. Author: Christoph Hofer. Inbenchmark Benchmark Experiments Toolbox. Au- view: Spatial. thor: Manuel J. A. Eugster. corrsieve CorrSieve. Author: Michael G. Campana.ber Batch Effects Removal. Author: Marco Giordan. costat Time series costationarity determination and tests of stationarity. Authors: Guy Nason andbfp Bayesian Fractional Polynomials. Author: Alessandro Cardinali. Daniel Sabanes Bove. crossmatch The cross-match test. Authors: Ruthbild BInary Longitudinal Data. Authors: M. Helena Heller, Dylan Small, Paul Rosenbaum. Gonçalves, M. Salomé Cabral and Adelchi Az- zalini; incorporates Fortran-77 code written by cudaBayesregData Data sets for the examples used R. Piessens and E. de Doncker in ‘Quadpack’. in the package cudaBayesreg. Author: Adelino Ferreira da Silva.bisoreg Bayesian Isotonic Regression with Bernstein Polynomials. cumSeg Change point detection in genomic se- quences. Author: Vito M.R. Muggeo.bivarRIpower Sample size calculations for bivariate longitudinal data. Authors: W. Scott Comulada curvHDR curvHDR ﬁltering. Authors: George and Robert E. Weiss. Luta, Ulrike Naumann and Matt Wand.The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
95.
N EWS AND N OTES 95cxxPack R/C++ Tools for Literate Statistical Prac- expm Matrix exponential. Authors: Vincent Goulet, tice. Author: Dominick Samperi. In view: Re- Christophe Dutang, Martin Maechler, David producibleResearch. Firth, Marina Shapira, Michael Stadelmann.dafs Data analysis for forensic scientists. Authors: fANCOVA Nonparametric Analysis of Covariance. James Curran, Danny Chang. Author: Xiao-Feng Wang.dcv Conventional Cross-validation statistics for factorQR Bayesian quantile regression factor mod- climate-growth model. Author: Zongshan Li els. Author: Lane Burgette. with contributions from Jinlong Zhang. fastR Data sets and utilities for Foundations andddepn Dynamical Deterministic Effects Propagation Applications of Statistics by R Pruim. Author: Networks: Infer signalling networks for time- Randall Pruim. course RPPA data. Author: Christian Bender. fmsb Functions for medical statistics book withdelftfews delftfews R extensions. Authors: Mario some demographic data. Author: Minato Frasca, Michèl van Leeuwen. Nakazawa.demography Forecasting mortality, fertility, migra- futile.paradigm A framework for working in a func- tion and population data. Authors: Rob J Hyn- tional programming paradigm in R. Author: dman with contributions from Heather Booth, Brian Lee Yung Rowe. Leonie Tickle and John Maindonald. fwdmsa Forward search for Mokken scale analysis. Author: Wobbe P. Zijlstra.dicionariosIBGE Dictionaries for reading survey microdata from IBGE. Authors: Erick Fonseca, gMCP A graphical approach to sequentially rejec- Alexandre Rademaker. tive multiple test procedures. Author: Kor- nelius Rohmeyer.directlabels Direct labels for plots. Author: Toby Dylan Hocking. galts Genetic algorithms and C-steps based LTS es- timation. Author: Mehmet Hakan Satman.discretization Data preprocessing, discretization for classiﬁcation. Authors: Kim, H. J. games Statistical Estimation of Game-Theoretic Models. Authors: Curtis S. 
Signorino anddismo Species distribution modeling. Authors: Brenton Kenkel. Robert Hijmans, Steven Phillips, John Leath- wick and Jane Elith. gaussDiff Difference measures for multivariate Gaussian probability density functions. Au-distory Distance Between Phylogenetic Histories. thor: Henning Rust. Authors: John Chakerian and Susan Holmes. gcbd GPU/CPU Benchmarking in Debian-baseddivisors Find the divisors of a number. Author: systems. Author: Dirk Eddelbuettel. In view: Greg Hirson. HighPerformanceComputing.dixon Nearest Neighbour Contingency Table Anal- genepi Genetic Epidemiology Design and Inference. ysis. Authors: Marcelino de la Cruz Rot and Author: Venkatraman E. Seshan. Philip M. Dixon. geofd Spatial prediction for function value data.dpa Dynamic Path Approach. Author: Emile Chap- Authors: Ramon Giraldo, Pedro Delicado, pin. Jorge Mateu.eVenn A powerful tool to compare lists and draw glmpathcr Fit a penalized continuation ratio model Venn diagrams. Author: Nicolas Cagnard. for predicting an ordinal response. Author: Kellie J. Archer.ecoreg Ecological regression using aggregate and in- dividual data. Author: Christopher Jackson. glrt Generalized Logrank Tests for Interval-censored Failure Time Data. Authors: Qiang Zhao andegarch EGARCH simulation and ﬁtting. Author: Jianguo Sun. In view: Survival. Kerstin Konnerth. googleVis Using the Google Visualisation API withemg Exponentially Modiﬁed Gaussian (EMG) Dis- R. Authors: Markus Gesmann, Diego de tribution. Author: Shawn Garbett. Castillo.evaluate Parsing and evaluation tools that provide gptk Gaussian Processes Tool-Kit. Authors: Alfredo more details than the default. Author: Hadley A. Kalaitzis, Pei Gao, Antti Honkela, Neil D. Wickham. Lawrence.The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
96.
96 N EWS AND N OTESgridExtra functions in Grid graphics. Author: Bap- isva Independent Surrogate Variable Analysis. Au- tiste Auguie. thor: Andrew E Teschendorff.grt General Recognition Theory. Authors: The origi- iv Information Visualisation with UML and Graphs. nal Matlab toolbox by Leola A. Alfonso-Reese. Author: Charlotte Maia. R port, with several signiﬁcant modiﬁcations by Kazunaga Matsuki. labeling Axis Labeling. Author: Justin Talbot.gsc Generalised Shape Constraints. Author: Char- laeken Laeken indicators for measuring social cohe- lotte Maia. sion. Authors: Andreas Alfons, Josef Holzer and Matthias Templ. In view: OfﬁcialStatistics.helpr Help for R. Authors: Hadley Wickham and Barret Schloerke. landsat Radiometric and topographic correction of satellite imagery. Author: Sarah Goslee.hisemi Hierarchical Semiparametric Regression of Test Statistics. Authors: Long Qu, Dan Nettle- lessR Less Code, More Results. Authors: David W. ton, Jack Dekkers. Gerbing, School of Business Administration, Portland State University.hotspots Hot spots. Author: Anthony Darrouzet- Nardi. list Multivariate Statistical Analysis for the Item Count Technique. Authors: Graeme Blair, Ko-huge High-dimensional Undirected Graph Estima- suke Imai. tion. Authors: Tuo Zhao, Han Liu, Kathryn Roeder, John Lafferty, Larry Wasserman. log4r A simple logging system for R, based on log4j.hydroGOF Goodness-of-ﬁt functions for compari- Author: John Myles White. son of simulated and observed hydrological longpower Sample size calculations for longitudinal time series. Author: Mauricio Zambrano Bigia- data. Authors: Michael C. Donohue, Anthony rini. C. Gamst, Steven D. Edland.hydroTSM Time series management, analysis and lubridate Make dealing with dates a little easier. interpolation for hydrological modelling. Au- Authors: Garrett Grolemund, Hadley Wick- thor: Mauricio Zambrano-Bigiarini. 
ham.icaOcularCorrection Independent Components magma Matrix Algebra on GPU and Multicore Ar- Analysis (ICA) based eye-movement correc- chitectures. Author: Brian J Smith. In view: tion. Authors: Antoine Tremblay, IWK Health HighPerformanceComputing. Center.igraphtosonia Convert iGraph graps to SoNIA .son makesweave Literate Programming with Make and ﬁles. Author: Sean J Westwood. Sweave. Author: Charlotte Maia.imguR Share plots using the image hosting service maxLinear Conditional Samplings for Max-Linear imgur.com. Author: Aaron Statham. Models. Author: Yizao Wang.indicspecies Functions to assess the strength and mcmcplots Create Plots from MCMC Output. Au- signiﬁcance of relationship of species site thor: S. McKay Curtis. group associations. Authors: Miquel De Cac- mcproﬁle Multiple Contrast Proﬁles. Author: eres, Florian Jansen. Daniel Gerhard.inference Functions to extract inferential values of a ﬁtted model object. Author: Vinh Nguyen. mem Moving Epidemics Method R Package. Au- thor: Jose E. Lozano Alonso.infochimps An R wrapper for the infochimps.com API services. Author: Drew Conway. memoise Memoise functions. Author: Hadley Wickham.interactivity toggle R interactive mode. Authors: Jeffrey Horner, Jeroen Ooms. metatest Fit and test metaregression models. Au- thor: Hilde M. Huizenga & Ingmar Visser.ipdmeta Individual patient and Mixed-level Meta- analysis of Time-to-Event Outcomes. Author: mfr Minimal Free Resolutions of Graph Edge Ideals. S. Kovalchik. Author: David J. Marchette.isdals Provides data sets for “Introduction to Statis- mhurdle Estimation of models with limited depen- tical Data Analysis for the Life Sciences”. Au- dent variables. Authors: Fabrizio Carlevaro, thors: Claus Ekstrom and Helle Sorensen. Yves Croissant, Stephane Hoareau.The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
97.
N EWS AND N OTES 97mixedQF Estimator with Qudratics Forms in neuRosim Functions to generate fMRI data includ- Mixeds Models. Authors: Jean-Benoist Leger, ing activated data, noise data and resting state ENS Cachan & Jean-Benoist Leger, INRA. data. Authors: Marijke Welvaert with contri- butions from Joke Durnez, Beatrijs Moerkerke,mixsep Forensic Genetics DNA Mixture Separation. Yves Rosseel and Geert Verdoolaege. Author: Torben Tvedebrink. nnc Nearest Neighbor Autocovariates. Authors: A.mixsmsn Fitting ﬁnite mixture of scale mixture of I. McLeod and M. S. Islam. skew-normal distributions. Authors: Marcos nncRda Improved RDA Using nnc. Authors: M. S. Prates, Victor Lachos and Celso Cabral. Islam and A. I. McLeod.mkssd Efﬁcient multi-level k-circulant supersatu- normwhn.test Normality and White Noise Testing. rated designs. Author: B N Mandal. Author: Peter Wickham.modelcf Modeling physical computer codes with npst A generalization of the nonparametric season- functional outputs using clustering and dimen- ality tests of Hewitt et al. (1971) and Rogerson sionality reduction. Author: Benjamin Auder. (1996). Author: Roland Rau.modelfree Model-free estimation of a psychometric openair Tools for the analysis of air pollution data. function. Authors: Ivan Marin-Franch, Kamila Authors: David Carslaw and Karl Ropkins. Zychaluk, and David H. Foster. operator.tools Utilities for working with R’s opera- tors. Author: Christopher Brown.mondate Keep track of dates in terms of months. Author: Dan Murphy. In view: TimeSeries. optimx A replacement and extension of the optim() function. Authors: John C Nash and Ravimpmcorrelogram Multivariate Partial Mantel Cor- Varadhan. In view: Optimization. relogram. Author: Marcelino de la Cruz. orQA Order Restricted Assessment Of Microar-mpt Multinomial Processing Tree (MPT) Models. ray Titration Experiments. Author: Florian Author: Florian Wickelmaier. Klinglmueller.mr Marginal regresson models for dependent data. 
orderbook Orderbook visualization/Charting soft- Authors: Guido Masarotto and Cristiano Varin. ware. Authors: Andrew Liu, Khanh Nguyen, Dave Kane. In view: Finance.mritc MRI tissue classiﬁcation. Authors: Dai Feng parfossil Parallelized functions for palaeoecologi- and Luke Tierney. In view: MedicalImaging. cal and palaeogeographical analysis. Author: Matthew Vavrek.mtsdi Multivariate time series data imputation. Au- thors: Washington Junger and Antonio Ponce partitionMap Partition Maps. Author: Nicolai de Leon. Meinshausen.multitaper Multitaper Spectral Analysis. Author: pathClass Classiﬁcation using biological pathways Karim Rahim. as prior knowledge. Author: Marc Johannes.mutoss Uniﬁed multiple testing procedures. pathmox Segmentation Trees in Partial Least Authors: MuToss Coding Team (Berlin Squares Path Modeling. Authors: Gaston 2010), Gilles Blanchard, Thorsten Dickhaus, Sanchez, and Tomas Aluja. Niklas Hack, Frank Konietschke, Kornelius pbapply Adding Progress Bar to ‘*apply’ Functions. Rohmeyer, Jonathan Rosenblatt, Marsel Scheer, Author: Peter Solymos. Wiebke Werft. pequod Moderated regression package. Author: Al-mutossGUI A graphical user interface for the Mu- berto Mirisola & Luciano Seta. Toss Project. Authors: MuToss Coding Team (Berlin 2010), Gilles Blanchard, Thorsten Dick- pesticides Analysis of single serving and compos- haus, Niklas Hack, Frank Konietschke, Kor- ite pesticide residue measurements. Author: nelius Rohmeyer, Jonathan Rosenblatt, Marsel David M Diez. Scheer, Wiebke Werft. pfda Paired Functional Data Analysis. Author: An- drew Redd.nacopula Nested Archimedean Copulas. Authors: Marius Hofert and Martin Maechler. In view: pglm panel generalized linear model. Author: Yves Distributions. Croissant.The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
98.
98 N EWS AND N OTESpheatmap Pretty Heatmaps. Author: Raivo Kolde. rJython R interface to Python via Jython. Au- thors: G. Grothendieck and Carlos J. Gil Bel-phitest Nonparametric goodness-of-ﬁt methods losta (authors of Jython itself are Jim Hugunin, based on phi-divergences. Author: Leah R. Barry Warsaw, Samuele Pedroni, Brian Zim- Jager. mer, Frank Wierzbicki and others; Bob Ippolitophybase Basic functions for phylogenetic analysis. is the author of the simplejson Python module). Author: Liang Liu. In view: Phylogenetics. rTOFsPRO Time-of-ﬂight mass spectra signal pro-phylosim R packge for simulating biological se- cessing. Authors: Dariya Malyarenko, Mau- quence evolution. Authors: Botond Sipos, Gre- reen Tracy and William Cooke. gory Jordan. refund Regression with Functional Data. Authors: Philip Reiss and Lei Huang.phylotools Phylogenetic tools for ecologists. Au- thors: Jinlong Zhang, Xiangcheng Mi, Nancai relevent Relational Event Models. Author: Carter T. Pei. Butts.pi0 Estimating the proportion of true null hypothe- reportr A general message and error reporting sys- ses for FDR. Authors: Long Qu, Dan Nettleton, tem. Author: Jon Clayden. Jack Dekkers. reshape2 Flexibly reshape data: a reboot of the re-plotGoogleMaps Plot HTML output with Google shape package. Author: Hadley Wickham. Maps API and your own data. Author: Milan rmac Calculate RMAC or FMAC agreement coefﬁ- Kilibarda. cients. Author: Jennifer Kirk.plsRglm Partial least squares Regression for gen- rphast R interface to PHAST software for compar- eralized linear models. Authors: Frederic ative genomics. Authors: Melissa Hubisz, Bertrand, Nicolas Meyer, Myriam Maumy- Katherine Pollard, and Adam Siepel. Bertrand. In views: ChemPhys, Psychometrics. rpud R functions for computation on GPU. Author:pmr Probability Models for Ranking Data. Authors: Chi Yau. In view: HighPerformanceComputing. Paul H. Lee and Philip L. H. Yu. 
rrcovNA Scalable Robust Estimators with Highpolysat Tools for Polyploid Microsatellite Analysis. Breakdown Point for Incomplete Data. Author: Author: Lindsay V. Clark. Valentin Todorov. In view: OfﬁcialStatistics.portes Portmanteau Tests for ARMA, VARMA, rseedcalc Estimation of Proportion of GM Stacked ARCH, and FGN Models. Author: Esam Seeds in Seed Lots. Authors: Kevin Wright, Mahdi & A. Ian McLeod. Jean-Louis Laffont.ppstat Point Process Statistics. Author: Niels rvgtest Tools for Analyzing Non-Uniform Pseudo- Richard Hansen. Random Variate Generators. Authors: Josef Leydold and Sougata Chaudhuri.processdata Process Data. Author: Niels Richard Hansen. satin Functions for reading and displaying satel- lite data for oceanographic applications withpso Particle Swarm Optimization. Author: Claus R. Authors: Héctor Villalobos and Eduardo Bendtsen. González-Rodríguez.ptinpoly Point-In-Polyhedron Test (3D). Authors: schwartz97 A package on the Schwartz two-factor Jose M. Maisog, Yuan Wang, George Luta, Jian- commodity model. Authors: Philipp Erb, fei Liu. David Luethi, Juri Hinz, Simon Otziger. Inpvclass P-values for Classiﬁcation. Authors: Niki view: Finance. Zumbrunnen, Lutz Duembgen. sdcMicroGUI Graphical user interface for packagequalityTools Statistical Methods for Quality Sci- sdcMicro. Author: Matthias Templ. ence. Author: Thomas Roth. selectMeta Estimation weight functions in meta analysis. Author: Kaspar Ruﬁbach.rAverage Parameter estimation for the Averaging model of Information Integration Theory. Au- seqCBS CN Proﬁling using Sequencing and CBS. thors: Giulio Vidotto, Stefano Noventa, Davide Authors: Jeremy J. Shen, Nancy R. Zhang. Massidda, Marco Vicentini. seqRFLP Simulation and visualization of restrictionrDNA R Bindings for the Discourse Network Ana- enzyme cutting pattern from DNA sequences. lyzer. Author: Philip Leifeld. Authors: Qiong Ding, Jinlong Zhang.The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
99.
N EWS AND N OTES 99sharx Models and Data Sets for the Study of Species- treemap Treemap visualization. Author: Martijn Area Relationships. Author: Peter Solymos. Tennekes.sigclust Statistical Signiﬁcance of Clustering. Au- triads Triad census for networks. Authors: Solomon thors: Hanwen Huang, Yufeng Liu & J. S. Mar- Messing, Sean J Westwood, Mike Nowak and ron. Dan McFarland.simexaft Authors: Juan Xiong, Wenqing He, Grace validator External and Internal Validation Indices. Y. Yi. In view: Survival. Author: Marcus Scherl.sinartra A simple web app framework for R, based violinmplot Combination of violin plot with mean on sinatra. Author: Hadley Wickham. and standard deviation. Author: Raphael W.smirnov Provides two taxonomic coefﬁcients from Majeed. E. S. Smirnov “Taxonomic analysis” (1969) wSVM Weighted SVM with boosting algorithm for book. Author: Alexey Shipunov (with help of improving accuracy. Authors: SungHwan Kim Eugenij Altshuler). and Soo-Heang Eo.smoothmest Smoothed M-estimators for 1- dimensional location. Author: Christian Hen- weirs A Hydraulics Package to Compute Open- nig. Channel Flow over Weirs. Author: William Asquith.soil.spec Soil spectral data exploration and regres- sion functions. Author: Thomas Terhoeven- wild1 R Tools for Wildlife Research and Manage- Urselmans. ment. Authors: Glen A. Sargeant, USGS North- ern Prairie Wildlife Research Center.soiltexture Functions for soil texture plot, classiﬁ- cation and transformation. Authors: Julien wvioplot Weighted violin plot. Author: Solomon MOEYS, contributions from Wei Shangguan. Messing.someMTP Functions for Multiplicty Correction and xpose4 Xpose 4. Authors: E. Niclas Jonsson, An- Multiple Testing. Author: Livio Finos. drew Hooker and Mats Karlsson.spacetime classes and methods for spatio-temporal xpose4classic Xpose 4 Classic Menu System. Au- data. Author: Edzer Pebesma. thors: E. Niclas Jonsson, Andrew Hooker andsplinesurv Nonparametric bayesian survival anal- Mats Karlsson. ysis. 
Authors: Emmanuel Sharef, Robert L. xpose4data Xpose 4 Data Functions Package. Au- Strawderman, and David Ruppert. In view: thors: E. Niclas Jonsson, Andrew Hooker and Survival. Mats Karlsson.sprint Simple Parallel R INTerface. Author: Univer- xpose4generic Xpose 4 Generic Functions Package. sity of Edinburgh SPRINT Team. Authors: E. Niclas Jonsson, Andrew HookersurvJamda Survival prediction by joint analysis of and Mats Karlsson. microarray gene expression data. Author: Haleh Yasrebi. xpose4speciﬁc Xpose 4 Speciﬁc Functions Package. Authors: E. Niclas Jonsson, Andrew HookersurvJamda.data Author: Haleh Yasrebi. and Mats Karlsson.synchronicity Boost mutex functionality for R. Au- thor: Michael J. Kane. Other changestableplot Represents tables as semi-graphic dis- plays. Authors: Ernest Kwan and Michael • The following packages were moved to the Friendly. Archive: AIS, BayesPanel, BootCL, DEA, FKBL, GRASS, MSVAR, NADA, PhySim,tfer Forensic Glass Transfer Probabilities. Authors: RGtk2DfEdit, Rfwdmv, Rlsf, SNPMaP, James Curran and TingYu Huang. SharedHT2, TwslmSpikeWeight, VDCutil, VaR, XReg, aaMI, asuR, bayesCGH, bicre-thgenetics Genetic Rare Variants Tests. Author: duc, bim, birch, celsius, csampling, cts, Thomas Hoffmann. demogR, dprep, glmc, grasp, grnnR, ifa,tpsDesign Design and analysis of two-phase sam- ig, inetwork, ipptoolbox, knnTree, lnMLE, pling and case-control studies. Authors: locfdr, logilasso, lvplot, marg, mlCopulaS- Takumi Saegusa, Sebastien Haneuse, Nilanjan election, mmlcr, netmodels, normwn.test, Chaterjee, Norman Breslow. npde, nytR, ofw, oosp, pga, pinktoe, ppc,The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
100.
100 N EWS AND N OTES qgen, risksetROC, rjacobi, rv, sabreR, single- Kurt Hornik case, smd.and.more, stochasticGEM, titecrm, WU Wirtschaftsuniversität Wien, Austria and udunits. Kurt.Hornik@R-project.org • The following packages were resurrected from Achim Zeileis the Archive: MIfuns, agsemisc, assist, gllm, Universität Innsbruck, Austria mlmmm, mvgraph, nlts, rgr, and supclust. Achim.Zeileis@R-project.org • The following packages were removed from CRAN: seaﬂow and simpleR.The R Journal Vol. 2/2, December 2010 ISSN 2073-4859
News from the Bioconductor Project

by the Bioconductor Team
Program in Computational Biology
Fred Hutchinson Cancer Research Center

We are pleased to announce Bioconductor 2.7, released on October 18, 2010. Bioconductor 2.7 is compatible with R 2.12.0, and consists of 419 packages. There are 34 new packages, and enhancements to many others. Explore Bioconductor at http://bioconductor.org, and install packages with

> source("http://bioconductor.org/biocLite.R")
> biocLite()          # install standard packages...
> biocLite("IRanges") # ...or IRanges

New and revised packages

This release includes new packages for diverse areas of high-throughput analysis. Highlights include:

Next-generation sequencing packages for ChIP (iSeq, RMAPPER), methylated DNA immunoprecipitation (MEDIPS), and RNA-seq (rnaSeqMap) workflows, 454 sequencing (R453Plus1Toolbox) and management of microbial sequences (OTUbase).

Microarray analysis of domain-specific applications (array CGH, ADaCGH2; tiling arrays, les; miRNA, LVSmiRNA; and bead arrays, MBCB); specialized statistical methods (fabia, farms, RDRToolbox), and graphical tools (IsoGeneGUI).

Gene set, network, and graph oriented approaches and tools include gage, HTSanalyzeR, PatientGeneSets, BioNet, netresponse, attract, CoGAPS, ontoCAT, DEgraph, NTW, and RCytoscape.

Advanced statistical and modeling implementations relevant to high-throughput genetic analysis include BHC (Bayesian Hierarchical Clustering), CGEN (case-control studies in genetic epidemiology), and SQUADD.

Image, cell-based, and other assay packages include imageHTS, CRImage, coRNAi, GeneGA, NuPoP.

Our large collection of microarray- and organism-specific annotation packages has been updated to include information current at the time of the Bioconductor release. These annotation packages contain biological information about microarray probes and the genes they are meant to interrogate, or contain gene-based annotations of whole genomes. They are particularly valuable in providing stable annotations for repeatable research.

Several developments in packages maintained by the Bioconductor core team are noteworthy. The graphBAM class in the graph package is available to manipulate very large graphs. The GenomicRanges, GenomicFeatures, and Biostrings packages have enhanced classes such as TranscriptDb for representing genome-scale 'track' annotations from common data resources, MultipleAlignment for manipulating reference (and other moderate-length) sequences in a microbiome project, and SummarizedExperiment to collate range-based count data across samples in sequence experiments. The chipseq package has enhanced functionality for peak calling, and has been updated to use current data structures.

Further information on new and existing packages can be found on the Bioconductor web site, which contains 'views' that identify coherent groups of packages. The views link to on-line package descriptions, vignettes, reference manuals, and use statistics.

Other activities

The Bioconductor community met on July 28-30 at our annual conference in Seattle for a combination of scientific talks and hands-on tutorials, and on November 17-18 in Heidelberg, Germany for a meeting highlighting contributions from the European developer community. The active Bioconductor mailing lists (http://bioconductor.org/docs/mailList.html) connect users with each other, to domain experts, and to maintainers eager to ensure that their packages satisfy the needs of leading edge approaches. Bioconductor package maintainers and the Bioconductor team invest considerable effort in producing high-quality software. The Bioconductor team continues to ensure quality software through technical and scientific reviews of new packages, and daily builds of released packages on Linux, Windows, and Macintosh platforms.

Looking forward

Contributions from the Bioconductor community play an important role in shaping each release. We anticipate continued efforts to provide statistically informed analysis of next generation sequence data, especially in the down-stream analysis of comprehensive, designed sequencing experiments and integrative analyses. The next release cycle promises to be one of active scientific growth and exploration.
R Foundation News

by Kurt Hornik

Donations and new members

Donations

Agrocampus Ouest, France
Helsana Versicherungen AG, Switzerland
Kristian Kieselman, Sweden

New benefactors

eoda, Germany
Giannasca Corrado & Montagna Maria, Italy

New supporting institutions

Marine Scotland Science, UK

New supporting members

Ian M. Cook, USA
Chris Evans, UK
Martin Fewsow, Germany
Laurence Frank, Netherlands
Susan Gruber, USA
Robert A. Muenchen, USA
Geoff Potvin, USA
Ivo Welch, USA
Alex Zolot, USA

Kurt Hornik
WU Wirtschaftsuniversität Wien, Austria
Kurt.Hornik@R-project.org