mixedsde: an R package to fit mixed stochastic differential equations

Stochastic differential equations (SDEs) are useful to model continuous stochastic processes. When (independent) repeated temporal data are available, variability between the trajectories can be modeled by introducing random effects in the drift of the SDEs. These models are useful to analyse neuronal data, crack length data, pharmacokinetics, and financial data, to cite some applications among others. The R package mixedsde focuses on the estimation of SDEs with linear random effects in the drift. The goal is to estimate the common density of the random effects from repeated discrete observations of the SDE. The package proposes three estimation methods: a Bayesian parametric, a frequentist parametric and a frequentist nonparametric method. The three procedures are described as well as the main functions of the package. Illustrations are presented on simulated and real data.


Introduction
Continuous stochastic processes are usually observed discretely in time (with equidistant time points or not), leading to time series, although their intrinsic nature is of continuous time. While discrete-time stochastic models such as auto-regressive models (ARMA, GARCH, ...) have been widely developed for time series with equidistant times, more and more attention has been focused on stochastic differential equations (SDEs). Examples of applications where SDEs have been used include dynamics of thermal systems (Bacher et al., 2011), solar and wind power forecasting (Iversen et al., 2014), neuronal dynamics (Ditlevsen and Samson, 2014), pharmacokinetics/pharmacodynamics (PK/PD) (Hansen et al., 2014), and crack growth (Hermann et al., 2016).
Depending on the application, independent repeated temporal measures might be available. For example, drug concentration of several subjects is usually observed in PK; the dynamics of several neurons are measured along time; crack lengths can be measured repeatedly in crack growth studies. Each trajectory represents the behaviour of one unit/subject. The functional form is similar for all the trajectories. Fitting all the data simultaneously obviously improves the quality of estimation, but one has to take into account the variability between experiments. This is the typical framework of mixed-effects models, where some parameters are considered as random variables (random effects), proper to each trajectory. Hence the random effects represent the particularity of each subject. Some parameters can also be considered as common to all the trajectories (fixed effects).

In this work the model of interest is thus a mixed-effects stochastic differential equation (MSDE), mixed-effects meaning both fixed and random effects. The mixedsde package has been developed to estimate the density of the random effects from the discrete observations of M independent trajectories of an MSDE. It is available from https://github.com/charlottedion/mixedsde.
More precisely, we focus on MSDEs with linear drift. We consider M diffusion processes (X_j(t), t ≥ 0), j = 1, ..., M, with dynamics ruled by the SDE, for t ∈ [0, T],

    dX_j(t) = (α_j − β_j X_j(t)) dt + σ a(X_j(t)) dW_j(t),   X_j(0) = x_j,   (1)

where (W_j)_{1≤j≤M} are M independent Wiener processes, (α_j, β_j) are two (random) parameters, σ a(X_j(·)) is the diffusion coefficient with a a known function and σ an unknown constant. The initial condition x_j is assumed fixed (and known) in the paper, with possibly different values for each trajectory.
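As a minimal illustration of model (1), the following base-R sketch simulates one trajectory of the OU case (a(x) = 1) with an Euler-Maruyama scheme; all names and parameter values are illustrative and independent of the package.

```r
# Euler-Maruyama simulation of dX = (alpha - beta * X) dt + sigma dW,
# i.e. model (1) with a(x) = 1 (illustrative sketch, not package code).
simulate_ou <- function(alpha, beta, sigma, x0, T, N) {
  dt <- T / N
  x <- numeric(N + 1)
  x[1] <- x0
  for (k in 1:N) {
    x[k + 1] <- x[k] + (alpha - beta * x[k]) * dt + sigma * sqrt(dt) * rnorm(1)
  }
  x
}
set.seed(1)
# One trajectory with alpha = 1, beta = 2: mean-reverts around alpha/beta = 0.5
path <- simulate_ou(alpha = 1, beta = 2, sigma = 0.1, x0 = 0, T = 10, N = 5000)
```

The exact scheme used internally by the package may differ (the package relies on sde.sim for simulation); this sketch only conveys the dynamics of equation (1).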
In the package, we restrict the models to the two famous SDEs with linear drift, namely the Ornstein-Uhlenbeck model (OU) with a(x) = 1 and the Cox-Ingersoll-Ross model (CIR) with a(x) = √x. For the CIR model, we assume that x_j > 0, σ > 0, α_j > σ²/2 and β_j > 0 to ensure that the process never crosses zero.
The random parameters are denoted φ_j and belong to R^d with either d = 1 or d = 2:

• (d = 2) φ_j = (α_j, β_j) random,
• (d = 1) φ_j = α_j random and, for all j = 1, ..., M, β_j = β fixed,
• (d = 1) φ_j = β_j random and, for all j = 1, ..., M, α_j = α fixed.

The φ_j's are assumed independent and identically distributed (i.i.d.) and independent of the W_j's. The mixedsde package aims at estimating the random effects φ_j and their distribution, whose density is denoted f, from N discrete observations of the M trajectories (X_j(t))_j from equation (1) at discrete times t_0 = 0 < t_1 < ... < t_N = T (not necessarily equidistant). To the best of our knowledge, this is the first package in the R language dedicated to the estimation of MSDEs. Estimation procedures for MSDEs have been proposed in the nonparametric and the parametric frameworks, with a frequentist and a Bayesian point of view. The parametric approaches assume Gaussian random effects φ_j. Among other references, for parametric maximum likelihood estimation, we can cite Ditlevsen and de Gaetano (2005); Picchini et al. (2010) (Hermite expansion of the likelihood); Delattre et al. (2013) (explicit integration of the Girsanov likelihood) or Delattre et al. (2016) (mixture of Gaussian distributions for the random effects); for parametric Bayesian estimation, we can cite Oravecz et al. (2009) (restricted to Ornstein-Uhlenbeck) and Hermann et al. (2016) (general methodology); for nonparametric estimation, we can cite Comte et al. (2013); Dion (2014); Dion and Genon-Catalot (2015) (kernel estimator and deconvolution estimators).

Three estimation procedures are implemented in the mixedsde package: a kernel nonparametric estimator (Dion and Genon-Catalot, 2015), a parametric maximum likelihood estimator (Delattre et al., 2013) and a parametric Bayesian estimator (Hermann et al., 2016). The parametric frequentist and Bayesian approaches assume the random effects are Gaussian. The Bayesian approach seems the most appropriate method for a small time of observation T and a small number of trajectories M. The nonparametric approach can be used when no prior idea on the density is available and when T and M are both large enough. Finally, the parametric frequentist estimation can be used when data are symmetric, with a large number of discrete observations. This paper reviews in Section 2 the three estimation methods. An overview of the mixedsde package is given in Section 3 through a description of the main functions and of other related companion functions. The practical use of this package
is illustrated in Section 4 on simulated data and in Section 5 on one real dataset in neuronal modelling.

Density estimation in mixed stochastic differential models
We briefly recall the methodology of the three estimators implemented in the mixedsde package. We start with the nonparametric approach, then the frequentist parametric Gaussian method and finally the Bayesian parametric Gaussian method.

Nonparametric estimation of the random effects density
The first step of the nonparametric approach is to estimate the random effects. The idea is to maximize the likelihood of the process X_j^φ, solution of the stochastic differential equation with fixed φ. Assuming continuous observations of (X_j(t), 0 ≤ t ≤ T), the likelihood function is obtained with the Girsanov formula and, up to a multiplicative term not depending on φ, takes the exponential-family form

    L(X_j, φ) = exp( φᵗ U_j − (1/2) φᵗ V_j φ ),

where U_j gathers integrals with respect to dX_j(s) and V_j integrals with respect to ds.
Maximizing the likelihood yields the following estimator of φ_j:

    A_j = V_j⁻¹ U_j,   (2)

where U_j and V_j are the two sufficient statistics of the model (3). They are explicit, depending on the form of the random effects:

• α_j random and β known: U_j = ∫_0^T [dX_j(s) + β X_j(s) ds] / (σ² a²(X_j(s))) and V_j = ∫_0^T ds / (σ² a²(X_j(s))),
• β_j random and α known: U_j = −∫_0^T X_j(s) [dX_j(s) − α ds] / (σ² a²(X_j(s))) and V_j = ∫_0^T X_j²(s) ds / (σ² a²(X_j(s))),
• (α_j, β_j) random: denote b(x) = (1, −x)ᵗ, with uᵗ the transpose of vector u; here U_j = ∫_0^T b(X_j(s)) dX_j(s) / (σ² a²(X_j(s))) is a column vector of size 2 × 1 and V_j = (V_{j,k,l})_{k,l∈{1,2}} = ∫_0^T b(X_j(s)) b(X_j(s))ᵗ ds / (σ² a²(X_j(s))) is a 2 × 2 symmetric matrix.

Truncated versions of this estimator have been introduced for theoretical reasons. In the bidimensional case φ_j = (α_j, β_j), Dion and Genon-Catalot (2015) propose a truncated estimator that sets A_j to zero unless both eigenvalues λ_{i,j}, i = 1, 2, of the symmetric nonnegative matrix V_j are large enough, with I_2 the 2 × 2 identity matrix and κ a numerical constant calibrated in practice. In the one-dimensional case φ_j = β_j with α = 0, Genon-Catalot and Larédo (2016) propose an analogous truncation, again with κ a numerical constant calibrated in practice.
Based on these estimators of the φ_j's, we can proceed to the second step: the estimation of their density f. Several nonparametric estimators of f have been proposed (see Comte et al., 2013, for example). In the package mixedsde, we focus on the kernel estimator of f. Let us introduce the kernel function K : R^d → R, with d = 1, 2 depending on the dimension of φ_j. We assume K to be a C² function satisfying standard conditions (unit integral, square integrability of K and of its derivatives). Note that in the bidimensional case, h = (h_1, h_2) and the two marginal bandwidths are different.
The nonparametric estimator of the density f of φ_j is the kernel estimator

    f̂_h(x) = (1/M) Σ_{j=1}^{M} K_h(x − A_j),   with K_h(x) = (1/h) K(x/h) when d = 1,

and the estimator f̃_h is obtained by replacing the A_j's with their truncated versions. In the mixedsde package, Gaussian kernel estimators are implemented with the R functions density (available in the package stats) when d = 1 and kde2d (available in the package MASS) when d = 2, with an automatic selection of the bandwidth h. Note that when there is only one random effect, the bandwidth is selected by cross-validation with the argument bw = "ucv", or as the default value given by the rule of thumb if the chosen bandwidth is too small. Note that the estimator is unstable for small variances of the random effects.
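The bandwidth selection just described can be mimicked in a few lines of base R. This is an illustrative sketch, not the package's internal code: the stand-in sample A plays the role of the estimators A_j, and the "too small" fallback rule (one tenth of the rule-of-thumb value) is an assumption.

```r
# Kernel density estimation of the random-effects density from estimators A_j
# (here stand-in Gaussian draws), with cross-validated bandwidth and a
# rule-of-thumb fallback (sketch; the fallback threshold is assumed).
set.seed(6)
A <- rnorm(200, mean = 1, sd = 0.3)            # stand-in for the A_j's
h.ucv <- suppressWarnings(bw.ucv(A))           # unbiased cross-validation
h.rot <- bw.nrd0(A)                            # rule-of-thumb bandwidth
h <- if (h.ucv < h.rot / 10) h.rot else h.ucv  # fall back if ucv is too small
fhat <- density(A, bw = h, kernel = "gaussian")
```

In the bidimensional case the same idea applies with MASS::kde2d and a bandwidth vector h = (h_1, h_2).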
It is important to notice that the two random effects are not assumed independent.When there is only one random effect, the fixed parameter has to be entered by the user.
The computation of A_j = V_j⁻¹ U_j does not require the knowledge of σ², as it appears in both U_j and V_j and cancels out. It requires, however, the evaluation of the two continuous integrals in U_j and V_j, while the trajectories (X_j) are observed at discrete times (t_0, t_1, ..., t_N). For Δ_k = t_{k+1} − t_k, k = 0, ..., N − 1, the non-stochastic integrals ∫_0^T g(X_j(s)) ds, for any function g, are approximated by

    Σ_{k=0}^{N−1} g(X_j(t_k)) Δ_k.

For the stochastic integrals ∫_0^T g(X_j(s)) dX_j(s), we use the following simple discretization:

    Σ_{k=0}^{N−1} g(X_j(t_k)) (X_j(t_{k+1}) − X_j(t_k)).
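The discretization of U_j and V_j can be sketched for the OU model with two random effects (a(x) = 1, b(x) = (1, −x)ᵗ). The function name is illustrative, not the package's; σ² is omitted since it cancels in V_j⁻¹U_j.

```r
# Sketch of the discretized sufficient statistics U_j, V_j and the estimator
# A_j = V_j^{-1} U_j for one OU trajectory with two random effects.
estim_Aj <- function(x, times) {
  dt <- diff(times)                 # Delta_k = t_{k+1} - t_k
  dx <- diff(x)                     # increments X(t_{k+1}) - X(t_k)
  xk <- head(x, -1)                 # left points X(t_k); b(x) = (1, -x)^t
  U <- c(sum(dx), -sum(xk * dx))    # discretized stochastic integrals
  V <- matrix(c(sum(dt),       -sum(xk * dt),
                -sum(xk * dt),  sum(xk^2 * dt)), 2, 2)
  solve(V, U)                       # A_j = V_j^{-1} U_j
}
# Check on a long Euler-simulated OU path with (alpha_j, beta_j) = (1, 2):
set.seed(1)
N <- 50000; dt <- 0.002; x <- numeric(N + 1)
for (k in 1:N) x[k + 1] <- x[k] + (1 - 2 * x[k]) * dt + 0.1 * sqrt(dt) * rnorm(1)
A <- estim_Aj(x, seq(0, by = dt, length.out = N + 1))
```

For long observation times T the two components of A should be close to the trajectory's (α_j, β_j).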

Frequentist parametric estimation approach
In this section and the following one, we assume that the random parameters φ_j are Gaussian: φ_j ∼ N(µ, Ω) (with Ω = ω² when d = 1). For the bidimensional case d = 2, we estimate θ := (µ, Ω) by maximum likelihood. We define the likelihood function assuming first that the trajectories are continuously observed, similarly to the nonparametric approach (Section 2.1). Thanks to the Girsanov formula, the likelihood function of the j-th trajectory X_j is explicit in terms of U_j, V_j, µ and Ω, where I_2 is the 2 × 2 identity matrix. For the case d = 1, the parameters to estimate are θ := (µ, ω, ψ), where ψ denotes the fixed effect α or β. We adopt the subscript r for the position of the random effect, equal to 1 or 2, and c for the position of the common fixed effect (thus 2 or 1). The likelihood function of the j-th trajectory X_j is then written with the notations U, V from (3). Details on this formula are available in the Appendix. The likelihood function of the whole sample is L(θ) = ∏_{j=1}^{M} L(X_j, θ). The maximum likelihood estimator θ̂ := (µ̂, Ω̂, ψ̂) when d = 1 and θ̂ := (µ̂, Ω̂) when d = 2 is defined as its maximizer. This estimator is not explicit. In the mixedsde package, the function optim is used to maximize the likelihood numerically. The initialization of optim is obtained by computing the mean and the variance of the estimators A_j of the random parameters (see equation (2)). The sufficient statistics U_j and V_j are discretized as explained in Section 2.1.
Note that this parametric approach requires the knowledge of σ² to compute the sufficient statistics U_j and V_j, because V_j appears alone (not only through V_j⁻¹U_j) in the term R_j of the likelihood. We plug in the following estimator of σ², based on the quadratic variation of the trajectories:

    σ̂² = (1/M) Σ_{j=1}^{M} (1/N) Σ_{k=0}^{N−1} (X_j(t_{k+1}) − X_j(t_k))² / (Δ_k a²(X_j(t_k))).

Selection of (non-nested) models can be performed with the BIC criterion, defined by −2 log L(θ̂) + 2 log(M) for a model with one random effect and −2 log L(θ̂) + 4 log(M) with two random effects, and the AIC criterion, defined by −2 log L(θ̂) + 2 for one random effect and −2 log L(θ̂) + 4 for two random effects. These asymptotic criteria indicate the trade-off between maximizing the fit and minimizing the model complexity. Note that their theoretical properties are guaranteed only when σ² is known.
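The quadratic-variation plug-in for σ² can be sketched in base R for the OU case (a(x) = 1) with equidistant observations; the function name is illustrative and the drift is omitted from the toy data, which does not affect the quadratic variation at first order.

```r
# Sketch of the plug-in estimator of sigma^2 from the quadratic variation,
# for a(x) = 1 and equidistant time step dt (illustrative, not package code).
estim_sigma2 <- function(X, dt) {
  # X: M x (N+1) matrix of trajectories by rows
  mean(apply(X, 1, function(x) sum(diff(x)^2)) / (ncol(X) - 1) / dt)
}
set.seed(2)
M <- 20; N <- 1000; dt <- 0.001; sigma <- 0.5
# Toy trajectories: driftless diffusions with the target sigma
X <- t(replicate(M, cumsum(c(0, sigma * sqrt(dt) * rnorm(N)))))
s2 <- estim_sigma2(X, dt)   # should be close to sigma^2 = 0.25
```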
Bayesian parametric approach

We now introduce the prior distributions implemented in the mixedsde package for the parameters θ: a normal prior N(m, V) for µ and inverse Gamma priors IG(α_{ω,i}, β_{ω,i}) for the variances ω_i² and IG(α_σ, β_σ) for σ², where IG is the inverse Gamma distribution, which is conjugate to the normal likelihood, and m, V, α_{ω,i}, β_{ω,i}, α_σ, β_σ are hyperparameters fixed by the user. The case of only one random effect is nested by setting ω_1² or ω_2² equal to zero. The aim is to compute the posterior distribution p(θ|X_{1:M}), which is not explicit for the whole vector of parameters. Therefore, we simulate from it through a Gibbs sampler (see e.g. Robert and Casella, 2004). Here, we have a true transition density of both processes that is used for the likelihood; see Iacus (2008). For a general hierarchical diffusion approach based on the Euler approximation, see Hermann et al. (2016).
Analogously to the frequentist approach, there is a first step: sampling from the full conditional posterior of the random effects p(φ_j | (X_j(t_k))_{k=1,...,N}, θ), j = 1, ..., M. This is done by a Metropolis-Hastings (MH) algorithm.
The last step of the Gibbs sampler is sampling from the full conditional posterior of σ². For the CIR model, this is also conducted by an MH step. For the OU model, the inverse Gamma distribution is conjugate to the normal likelihood, so the full conditional posterior distribution is again inverse Gamma. In the case of one random effect, there is one additional Gibbs sampler step for the fixed effect, which is also conducted through an MH algorithm.
In the package, the starting values for the Gibbs sampler are set equal to the means of the prior distributions. In all the MH algorithms, one has to choose a proposal density. In the package mixedsde, we use a normal proposal density for all location parameters, with mean equal to the last chain iteration and a proposal variance that has to be chosen. For the CIR model, the proposal distribution for σ² is centred on the previous value σ²_prev of σ². The remaining question is how to choose a suitable proposal variance. This variance controls the dependence of the chain and the acceptance rate. If the variance is small, the acceptance rate is large and the chain gets very dependent. If the proposal variance is large, only few candidates are accepted, with the advantage of weakly dependent chains. This problem is solved in the package with an adaptive Metropolis-within-Gibbs algorithm (Rosenthal, 2011) using the proposal distribution N(0, e^{2l}) for the increments, with l the logarithm of the standard deviation of the increment. This parameter is chosen so that the acceptance rate is approximately 0.44, which is proposed as optimal for the Metropolis-within-Gibbs sampler. It is proposed to add/subtract an adaptation amount δ(n) = min(0.1, n^{−1/2}) to/from l after every 50th iteration and to adapt the proposal variance if the acceptance rate is smaller than 0.3 or larger than 0.6.
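The adaptation rule can be sketched as a small helper (illustrative, not the package's internal code): every 50 iterations, the log standard deviation l of the normal increment proposal is nudged by δ(n) = min(0.1, n^{−1/2}) when the batch acceptance rate leaves [0.3, 0.6].

```r
# Sketch of the adaptive Metropolis-within-Gibbs update of the log proposal
# standard deviation l, given the acceptance rate of the last batch and the
# batch number n (illustrative helper).
adapt_log_sd <- function(l, acc_rate, n) {
  delta <- min(0.1, 1 / sqrt(n))
  if (acc_rate < 0.3) l - delta        # accepting too rarely: shrink proposal sd
  else if (acc_rate > 0.6) l + delta   # accepting too often: enlarge proposal sd
  else l                               # acceptance rate acceptable: keep l
}
```

Shrinking the proposal when acceptance is low and enlarging it when acceptance is high steers the rate towards the 0.44 target mentioned above.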

Predictions
In many cases, one is interested not only in parameter estimation but also in the prediction of future observations. The first step is the prediction of a future random effect φ_pred. Its simulation is direct for the frequentist parametric approach: sample from N(µ̂, Ω̂). For the nonparametric approach, first note that f̂_h is an estimator given on a discrete grid {x_1, ..., x_n}, i.e. a vector of corresponding weights {p_1, ..., p_n} after normalisation. Simulating from the estimator f̂_h can therefore be performed by simulating a discrete variable on {x_1, ..., x_n} with (normalized) probabilities {p_1, ..., p_n}. For the Bayesian approach, a new φ_pred is sampled from the predictive distribution p(φ_pred | X_{1:M}) = ∫ p(φ_pred | µ, Ω) p(µ, Ω | X_{1:M}) d(µ, Ω), where the posterior of µ and Ω is approximated by the results of the Gibbs sampler. This distribution is not explicit; we propose to sample from it over a grid through the inversion method, as in the nonparametric case.
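Drawing from a density known only on a grid, as described above for f̂_h, reduces to sampling grid points with probability proportional to the density values. A minimal sketch with a stand-in density (names are illustrative):

```r
# Sketch of sampling a new random effect from a gridded density estimate:
# draw grid points with probability proportional to the density values.
sample_from_grid <- function(n, grid, dens) {
  p <- dens / sum(dens)                 # normalise the density values
  sample(grid, size = n, replace = TRUE, prob = p)
}
set.seed(3)
g <- seq(-3, 3, length.out = 200)
d <- dnorm(g)                           # stand-in for an estimated density f_h
phi_pred <- sample_from_grid(1000, g, d)
```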
Given a new random effect φ_pred, we are able to simulate predictive trajectories. For the frequentist approach this is performed using the transition density p(X(t_k) | X(t_{k−1}), φ_pred, σ²). The starting points of the process x_j are the observed ones. For the Bayesian approach, we implement two prediction settings. Firstly, analogously to the frequentist approach, a new trajectory is simulated using the transition density p(X(t_k) | X(t_{k−1}), φ_pred, σ²), where φ_pred is sampled from the MCMC posterior distribution p(φ | X_{1:M}). Secondly, we can calculate the predictive distribution in each time point, and then either calculate only the quantiles for a prediction interval or draw samples directly from the predictive distribution. For this predictive distribution, we take the starting point x_j = x_0 to be the same for all series. If the starting points varied, this would be an additional random effect whose density has to be estimated; this is not implemented in the estimation procedure and is, therefore, left out of the prediction.
It is then interesting to compare the new trajectories to the real ones and if the number of new trajectories is large enough, to compute an empirical confidence interval.

Overview of the mixedsde functions
This section presents an overview of the functions implemented in the package. Illustrations of the code are given in Section 4.

Data
Data is a matrix X of size M × N for M trajectories with N time points, not necessarily equidistant but identical for the M trajectories. The vector of times is a vector times of length N. Real datasets are available in the package and detailed in Section 5.
To lead a simulation study, the function mixedsde.sim allows to generate a list with an M × N matrix X of M trajectories on the interval [0, T] with N equidistant points (default value 100) and a vector times with the equidistant times. This function relies on the function sde.sim, available in the package sde (http://cran.r-project.org/package=sde), to simulate SDEs. One has to choose: model, either OU or CIR; random, which fixes the position and the number of random effects: random = 1 for α_j random, random = 2 for β_j random, or random = c(1,2) for α_j and β_j random; σ, the diffusion coefficient; invariant, whose default value 0 means that X_0 is 0 (default) or fixed by the user, while value 1 means that X_0 is generated from the invariant distribution (see details in the package documentation); density.phi, to choose the distribution of the random effects (see the package documentation).
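A typical simulation call might look as follows. The argument names mirror the description above but should be treated as assumptions and checked against ?mixedsde.sim; the parameter values are illustrative, and the call is guarded so the sketch is a no-op when the mixedsde package is not installed.

```r
# Illustrative call of mixedsde.sim (argument names assumed from the text
# above; check ?mixedsde.sim). Guarded: does nothing without the package.
if (requireNamespace("mixedsde", quietly = TRUE)) {
  sim <- mixedsde::mixedsde.sim(M = 20, T = 10, N = 500, model = "OU",
                                random = c(1, 2),
                                density.phi = "normalnormal",
                                param = c(1, 0.5, 2, 0.5),  # means/sds, assumed order
                                sigma = 0.1, invariant = 0)
  X <- sim$X          # 20 x 500 matrix of trajectories by rows
  times <- sim$times  # vector of 500 equidistant times
}
```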

Main function
The main function is mixedsde.fit, producing the estimation of the random effects and their common density. Inputs of mixedsde.fit are:

• X, an M × N matrix containing the trajectories by rows

• times, the vector of observation times
• model, the chosen model, either OU or CIR
• random, which fixes the position and the number of random effects: random = 1 for α_j random, random = 2 for β_j random, or random = c(1,2) for α_j and β_j random
• estim.method, the estimation method: nonparam (see Section 2.1), paramML (see Section 2.2) or paramBayes (see Section 2.3)
• fixed, the value of the fixed effect β (resp. α) when random = 1 (resp. random = 2), default 0 (only for the frequentist approaches)
• estim.fix, 1 if the fixed effect is estimated, default 0 (only for the frequentist parametric approach when random = 1 or 2)
• gridf, the x-axis grid on which the random effect distribution is computed: we recommend a fine grid with at least 200 points; the default value is a sequence of length 500 starting at 0.8 × min_j A_j and ending at 1.2 × max_j A_j (only for the frequentist approaches)
• prior, the list of prior parameters m, v, alpha.omega, beta.omega, alpha.sigma, beta.sigma for the paramBayes method: default values are calculated based on the estimations (A_j)_j for the first min(3, ⌈0.1 M⌉) series, and the main estimation is then only made with the remaining ⌈0.9 M⌉ series (only for the Bayesian approach)
• nMCMC, the length of the Markov chain for the paramBayes method (only for the Bayesian approach)

In the following we describe the related methods proposed in the package; they are summarised in Table 1.
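Putting the inputs together, a minimal fitting call could look like the sketch below: toy OU data with trajectory-specific (α_j, β_j) are simulated in base R, and the fitting call itself is guarded since it requires the mixedsde package; its exact signature should be checked against ?mixedsde.fit.

```r
# Toy data: 10 OU trajectories with trajectory-specific (alpha_j, beta_j),
# simulated by Euler-Maruyama in base R (illustrative sketch).
set.seed(5)
M <- 10; N <- 500; T <- 5; dt <- T / (N - 1)
times <- seq(0, T, length.out = N)
X <- matrix(0, M, N)
for (j in 1:M) {
  a <- rnorm(1, 1, 0.2); b <- rnorm(1, 2, 0.2)  # random effects (alpha_j, beta_j)
  for (k in 2:N)
    X[j, k] <- X[j, k - 1] + (a - b * X[j, k - 1]) * dt + 0.1 * sqrt(dt) * rnorm(1)
}
# Guarded fitting call (argument names as described above; check ?mixedsde.fit):
if (requireNamespace("mixedsde", quietly = TRUE)) {
  estim <- mixedsde::mixedsde.fit(times = times, X = X, model = "OU",
                                  random = c(1, 2), estim.method = "nonparam")
}
```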

Outputs
The output of mixedsde.fit is an S4 class called Freq.fit for the frequentist approaches and Bayes.fit for the Bayesian approach. Results of the estimation procedure are available as a list by applying the function out to the Freq.fit (resp. Bayes.fit) object. Elements of Freq.fit are:

• sigma2, the estimator σ̂² of the diffusion coefficient
• estimphi, the estimators (A_j)_j of the random effects
• estimphi.trunc, the truncated estimators (Ã_j)_j of the random effects
• estim.fixed, the estimator of the fixed effect if random = 1 or 2, estim.method = paramML and estim.fix = 1, default 0
• gridf, the x-axis grid on which the random effect distribution is computed
• estimf, the estimator of the density of the random effects (for both paramML and nonparam methods)
• cutoff, a binary M-vector indicating the truncated trajectories, default FALSE when no truncation
• estimf.trunc, the truncated estimator of the density of the random effects
• mu, the estimation of the Gaussian mean of the random effects (only for the paramML method)
• omega, the estimation of the Gaussian variance matrix of the random effects (only for the paramML method)
• aic and bic, the AIC and BIC criteria (only for the paramML method)
• index, the indices of the trajectories used for the estimation; excluded are trajectories with V_j = 0 or V_j = +∞ (one random effect) or det V_j = +∞ (two random effects), and trajectories containing negative values for the CIR model.

Elements of Bayes.fit are:

• sigma2, the trace of the Markov chain simulated from the posterior of σ²
• mu, the trace of the Markov chain simulated from the posterior of µ
• omega, the trace of the Markov chain simulated from the posterior of ω²
• alpha, the trace of the Markov chain simulated from the posterior of α_j, an nMCMC × M matrix if α is a random effect, nMCMC × 1 otherwise
• beta, the trace of the Markov chain simulated from the posterior of β_j, an nMCMC × M matrix if β is a random effect, nMCMC × 1 otherwise
• burnIn, a proposal for the burn-in phase
• thinning, a proposal for the thinning rate
• ind.4.prior, the indices used for the prior parameter calculation, M + 1 if prior parameters were specified.
The outputs burnIn and thinning are only proposals for a burn-in phase and a thinning rate. The proposed burnIn is calculated by dividing the Markov chains into 10 blocks and calculating the 95% credibility intervals and the respective means. Starting from the first block, a block is added to the burn-in as long as the mean of the current block is not in the credibility interval of the following block, or vice versa. The thinning rate is proposed as the first lag which leads to a chain autocorrelation of less than 80%. It is not easy to automate these choices, so it is highly recommended by the authors to verify the chains manually.
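The thinning proposal can be sketched with the base-R autocorrelation function (an illustrative helper, not the package's code): the first lag at which the chain autocorrelation drops below 0.8.

```r
# Sketch of the thinning proposal: first lag with autocorrelation below 80%.
propose_thinning <- function(chain, max.lag = 100) {
  ac <- acf(chain, lag.max = max.lag, plot = FALSE)$acf[-1]  # drop lag 0
  which(ac < 0.8)[1]
}
set.seed(4)
# AR(1) chain with lag-1 autocorrelation 0.9: acf at lag k is about 0.9^k,
# so the proposed thinning should be a small lag (around 2 or 3).
chain <- as.numeric(arima.sim(list(ar = 0.9), n = 5000))
thin <- propose_thinning(chain)
```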
The command plot() applied to a Freq.fit object produces a frequency histogram of the (A_j)_j (one or two, according to the number of random effects) with the estimated density (red curve) and the truncated estimator if available (dotted grey-red curve), and a quantile-quantile graph with the quantiles of the A_j's versus the quantiles of a normal sample of the same length, with the same empirical mean and standard deviation. This illustrates the normality (or not) of the sample. Applying this function to the nonparametric results indicates whether the Gaussian assumption of the parametric approach is appropriate. When plot() is applied to a Bayes.fit object, one can choose among four different options through the argument style. The default value is chains: it plots the Markov chains for the different parameter values. Option acf leads to the corresponding autocorrelation functions, density to the approximated densities for each parameter and cred.int to the credibility intervals of the random parameters, with the input parameter level, default 0.05. For all options, with the input parameter reduced = TRUE, the burn-in period is excluded and a thinning rate is applied; default is FALSE. There is also a possibility to include the prior means in the plots as lines with plot.priorMean = TRUE, default FALSE.
In the Bayesian estimation, the influence of the prior parameters is of interest; thus, for the Bayes.fit object, there is a second plot method, named plot2compare, where three estimation objects can be compared. For reasons of clarity, only the densities are compared, with the default reduced = TRUE. Here, there is also a possibility to include true.values, a list of the true parameters, for the comparison in a simulation example.
The command summary() applied to a Freq.fit object computes the kurtosis and the skewness of the distribution, σ̂², the empirical mean and standard deviation computed from the estimators (A_j)_j, µ̂, Ω̂ (and the fixed effect α̂ or β̂), and the AIC and BIC criteria for the frequentist MLE method. When applied to a Bayes.fit object, it computes the means and credibility intervals (default level 95%) for each parameter (µ, Ω, σ, α, β). Here, there is also the possibility to choose the burn-in and the thinning rate manually through the input parameters burnIn and thinning.
The command print() applied to a Freq.fit object returns whether the cutoff was used or not and the vector of excluded trajectories. When applied to a Bayes.fit object, it returns the acceptance rates of the MCMC procedure.

Validation methods
Validation of a mixed model, obtained with the function valid, is an individual validation. Indeed, the validation of trajectory j is obtained by comparing it to M new trajectories simulated with parameters (α, β) fixed to the estimator A_j (or its truncated version Ã_j) in the frequentist approaches and to the posterior means in the Bayesian approach. Inputs of the function are the estimation object and optional plot parameters. This is an empirical method. The recent work of Kuelbs and Zinn (2015) on depth and quantile regions for stochastic processes (see for example Zuo and Serfling (2000) for definitions of depth functions) should provide the theoretical context for a more extensive study. This could be done in further work.

Function pred applied to a Freq.fit object returns a list with predicted random effects phipred, predicted trajectories Xpred and the indices of the corresponding true trajectories indexpred (see Section 2.4 for details of the simulation). If plot.pred = TRUE (default), three plots are produced. Left: predicted random effects versus estimated random effects. Middle: true trajectories. Right: predicted trajectories and their empirical 95% prediction intervals (default value level = 0.05). The prediction can also be done from the truncated estimator f̃_h based on the Ã_j (5), with the argument pred.trunc = 1.

Prediction methods
Function pred applied to a Bayes.fit object returns an S4 class object Bayes.pred. The first element of this class is Xpred, which depends on the input parameters. With the input trajectories = TRUE, the matrix Xpred contains the M drawn trajectories by rows (see the first method described for the Bayesian approach in Section 2.4). Default is trajectories = FALSE, which leads to the calculation of the predictive distribution explained in Section 2.4. With the input only.interval = TRUE (default), only the quantiles for the 1 − level prediction interval are calculated, stored in qu.l and qu.u. Input only.interval = FALSE additionally provides Xpred containing sample.length (default 500) samples from the predictive distribution in each time point of the observations (except the first). In both cases, with plot.pred = TRUE, two figures are produced. On the left side, the data trajectories are compared with the prediction intervals and, on the right side, the coverage rate is depicted, which is stored in the entry coverage.rate, namely the proportion of series covered by the prediction intervals at each time point. The last class entry estim stores the results of the Bayes.fit object in a list. Other input parameters are burnIn and thinning, which allow for choosing another burn-in phase and thinning rate than those proposed in the Bayes.fit object.
For the Bayes.pred class object, two plot methods are available. plot() repeats the figures that are created with plot.pred = TRUE in the pred method. plot2compare() compares up to three Bayes.pred objects: in a first figure the prediction intervals are presented in black, red and green and the observed data series in grey, and in a second figure the corresponding coverage rates are compared. With the input parameter names, a vector of characters to be written in a legend can be indicated.
Validation of the MSDE is produced by the function valid. For example, the individual validation of the first trajectory is plotted in Figure 5, the first two graphs on the left. This illustrates the good estimation of the random effects: a beam of trajectories with the true one in the middle and the lining up of the quantiles.
Finally, we can predict some trajectories using pred. Predictions are shown in Figure 6, as a result of predPOU <- pred(estim_param). The beam of predicted trajectories (right) is close to the true one (middle). The lining up of the predicted random effects versus the estimated random effects (left) shows the goodness of the prediction from the estimated density, and thus of the estimation of the density.

Bayesian estimation
The Bayesian method is applied to Example 2. Priors are constructed from the true values, but default values can be used instead. The fixed effect β has a small acceptance rate, which explains the dependent chain (Figure 7, top right). This is due to a very sharp likelihood, because of the large amount of observations (N · M) informing the fixed effect, in comparison to the N observations informing each random effect.
Predictions in the Bayesian framework and the corresponding Figure 8 are obtained by pred.result <- pred(estim_bayes). Figure 8 shows the beam of simulated data trajectories together with the 95% prediction intervals. Coverage rates are shown in the right plot, and we see that the intervals hold the level.
Package mixedsde through a real data example

A real dataset is available (neuronal.data.rda) through lists of a matrix X and a vector times. We detail below the analysis of this dataset, following these steps: run the two-random-effects model with both the parametric and nonparametric procedures; choose the number of random effects depending on the variability of the estimators (A_{j,1}, A_{j,2}), on the shape of f̂_h and on the variance Ω̂.

Neuronal data (neuronal.data)
Neurons are the basis of the nervous system and each neuron is connected with around 1000 other neurons. They communicate through the emission of electrical signals. We focus on the dynamics of the neuron membrane potential between two spike emissions, measured in volts as the difference of ion concentrations between the exterior and the interior of the cell. Data are obtained from one single neuron of a pig. Data are composed of M = 240 membrane potential trajectories with N = 2000 equidistant observation times. The time step is δ = 0.00015 [s] and the observation time is T = 0.3 [s]. Data are loaded with data(neuronal.data). They are presented in Figure 9.
These data have been previously analysed with an Ornstein-Uhlenbeck model with one additive random effect (α_j): Picchini et al. (2008) and Picchini et al. (2010) use parametric methods assuming the normality of the random effect, and Dion (2014) a nonparametric method. Here α_j represents the local average input that the neuron receives after the j-th spike. The initial voltage (the value following a spike) is assumed to be equal to the resting potential and set to zero: x_j = 0. Parameter β_j (non-negative) is the time constant of the neuron. It was fixed in Picchini et al. (2008) and Picchini et al. (2010).
In this new analysis, we assume that both α_j and β_j may change from one trajectory to another, because of the influence of other neurons or of the environment, for example. Indeed, the form of each trajectory leads us to think that this is the right model: each one has its own mean-reverting value and its own speed to reach this value. There is no reason to assume that the speed is always the same but, looking at the trajectories, the stationary state seems to be reached at nearly the same time for all of them; thus the second random effect should have a small variance.

Fitting with MSDEs
Our goal is also to compare the two models, OU and CIR, both with two random effects, and the two approaches: nonparametric and parametric density estimation. Let us remark that for the CIR model the algorithm removes two trajectories, 168 and 224, because they contain negative values. For two random effects, the command is

estim <- mixedsde.fit(times, X = X, model = model, random = random, estim.method = estim.method)

and examples of use can be found in the help file of the dataset (command ?neuronal.data). We first apply the two frequentist approaches to the models with two random effects. The kurtosis and skewness of the distribution of the estimators A_j of the random effects, given in Table 2, are far from those of a symmetric distribution. The bidimensional density of (α_j, β_j) is estimated for both models with the parametric and nonparametric methods by running the function mixedsde.fit. Figure 10 gives the 4 estimated marginals. Blue (black) is for the OU model and green (grey) for the CIR model. The dotted lines are the estimations from the parametric method, the plain lines the nonparametric estimations. Parametric and nonparametric estimators are close, except for the second random effect with the OU model. Indeed, the parametric estimation produces a very small variance for the second random effect, suggesting that it could be fixed. If this assumption were valid, it would explain the difference with the nonparametric estimator, which is not stable when the variance is too small. The estimation of σ is σ̂ = 0.0136 for the OU model and σ̂ = 0.163 for the CIR model.
To compare with previous results from the literature, we focus on the OU model. To select the number and position of the random effects, we run the code with one random effect, additive or multiplicative (random = 1 or random = 2), for both models, estimating the common fixed parameter. Estimators of the means μ_1, μ_2 and standard deviations ω_1, ω_2 are given in Table 3 and compared to the values obtained in Picchini et al. (2008) and Picchini et al. (2010). The AIC and BIC criteria are also given in Table 3. The preferred model is the one minimizing both criteria. Thus, the OU model with one additive random effect φ_j = α_j and β = 37 seems to be the best model to describe these data. The summary method gives a kurtosis of 4.55 and a skewness of -0.95, and the estimate of σ is 0.0136. The estimated densities obtained for this model with β = 37 are given in Figure 11. The dotted line is the parametric estimate and the solid line the nonparametric estimate, plotted over the histogram of the A_j(T)'s. The nonparametric estimation detects a left tail that the parametric one does not. Otherwise both estimators are very close.
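The one-random-effect fit used for this model selection can be sketched as follows (passing the known value β = 37 through the fixed argument is an assumption on the interface; see ?mixedsde.fit):

```r
# OU model, one additive random effect alpha_j (random = 1), beta fixed to 37
estim1 <- mixedsde.fit(times, X = X, model = "OU", random = 1, fixed = 37,
                       estim.method = "paramML")

# summary reports AIC and BIC, as well as kurtosis and skewness of the A_j's
summary(estim1)
plot(estim1)
```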
The OU model with random = 1 is then validated with the valid function. Figure 12 illustrates the result for a randomly chosen trajectory (number 232): 100 simulated trajectories (black) and the true trajectory X_232 (red) in the left plot, and the quantiles of the true measurement among the 100 simulated points at each time point versus uniform quantiles in the right plot. The qq-plot is satisfactory (compare with the graph obtained on simulated data in Figure 5). Finally, some prediction plots are performed (not shown) with the pred method; they confirm that the OU model with random = c(1,2), with the parameters obtained from the parametric estimation, and the OU model with random = 1 and β = 37 produce very close trajectories and could both be validated.
We then apply the Bayesian procedure. As already mentioned, large data sets are a problem for the Bayesian procedure because of the very long running time; therefore, we thin the data. For parametric maximum-likelihood estimation, we can cite Picchini et al. (2010) (Hermite expansion of the likelihood), Delattre et al. (2013) (explicit integration of the Girsanov likelihood) and Delattre et al. (2016) (mixture of Gaussian distributions for the random effects); for parametric Bayesian estimation, Oravecz et al. (2009) (restricted to Ornstein-Uhlenbeck) and Hermann et al. (2016) (general methodology); for nonparametric estimation, Comte et al. (2013), Dion (2014) and Dion and Genon-Catalot (2015) (kernel and deconvolution estimators).
Prediction (see Section 2.4) is implemented in the function pred. The main inputs of the function are
• a Freq.fit or Bayes.fit object
• invariant: TRUE if the new trajectories are simulated according to the invariant distribution
• level: level of the empirical prediction intervals (default 0.05)
• plot.pred: TRUE to generate a figure (default TRUE)
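A typical call on a previously fitted object, here named estim1 (hypothetical name), would then be:

```r
# Prediction with 95% empirical prediction intervals; new trajectories are
# simulated from the initial condition rather than the invariant distribution
pred(estim1, invariant = FALSE, level = 0.05, plot.pred = TRUE)
```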

Figure 1 - Simulated example 1 (CIR with one Gamma random effect), nonparametric estimation. Left: histogram of the estimated random effects (A_j) and nonparametric estimation of f. Right: qq-plot of the (A_j) versus a Normal sample (the true distribution is Gamma). The figure (left and right) is provided by plot(estim_param); the validation is run with valid(estim_param).

Figure 8 - Simulated example 2 (OU with one Gaussian random effect), Bayesian estimation. Left: predicted trajectories in black and 95% prediction interval in grey (green). Right: coverage rates, i.e., the proportion of observed values covered by the prediction intervals.

Figure 11 - Neuronal data, OU model, α random, β fixed, frequentist approaches. Estimation of the density f: parametric estimation as a blue (black) solid line, nonparametric estimation as a blue (black) dotted line, with the histogram of the A_j's.

Table 1 - Summary of the different methods for the two S4 classes Freq.fit and Bayes.fit resulting from the function mixedsde.fit

The main inputs of the valid function are
• a Freq.fit or Bayes.fit object
• plot.valid: 1 to generate a figure (default value is 1)
• numj: a specific individual trajectory to validate (default: all trajectories)
• Mrep: number of simulated trajectories (default value 100)
Each observation X_numj(t_k) is compared with the Mrep simulated values (X^1_numj(t_k), ..., X^Mrep_numj(t_k)). If plot.valid = 1, two plots are produced. Left: plot of the Mrep new trajectories (black) and the true trajectory number numj (in grey/red). Right: quantile-quantile plot of the quantiles of a uniform distribution versus the N quantiles obtained by comparing X_numj(t_k) with the Mrep simulated values (X^1_numj(t_k), ..., X^Mrep_numj(t_k)).
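Putting these inputs together, a validation call on a fitted object, here named estim (hypothetical name), might look like:

```r
# Validate trajectory number 10 against 100 simulated trajectories,
# producing the two diagnostic plots described above
valid(estim, numj = 10, Mrep = 100, plot.valid = 1)
```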