A fixed point problem is one where we seek a vector, $X$, for a function, $f$, such that $f(X) = X$. The solution of many such problems can be accelerated by using a fixed point acceleration algorithm. With the release of the FixedPoint package there are now a number of algorithms available in R that can be used for accelerating the finding of a fixed point of a function. These algorithms include Newton acceleration, Aitken acceleration and Anderson acceleration as well as epsilon extrapolation methods and minimal polynomial methods. This paper demonstrates the use of fixed point accelerators in solving numerical mathematics problems using the algorithms of the FixedPoint package as well as the squarem method of the SQUAREM package.
R has had a number of packages providing optimisation algorithms for many years. These include traditional optimisers through the optim function, genetic algorithms through the rgenoud package (Mebane, Jr. and Sekhon 2011) and response surface global optimisers through packages like DiceKriging (Roustant et al. 2012). It also has several rootfinders like the uniroot method and the methods of the BB package (Varadhan and Gilbert 2009).
Fixed point accelerators are conceptually similar to both optimisation and root finding algorithms but thus far implementations of fixed point finders have been rare in R. Prior to FixedPoint's (Baumann and Klymak 2018) release, the squarem method of the SQUAREM package (Varadhan and Roland 2008) was the only effective general purpose fixed point accelerator available in R.
This paper shows how the finding of a fixed point of a function can be accelerated using fixed point accelerators in R. The next section starts with a brief explanation of fixed points before a number of fixed point acceleration algorithms are discussed. The algorithms examined include the Newton, Aitken and Scalar Epsilon Algorithm (SEA) methods that are designed for accelerating the convergence of scalar sequences. Five algorithms for accelerating vector sequences are also discussed, including the Vector Epsilon Algorithm (VEA), Anderson acceleration and three minimal polynomial algorithms (MPE, RRE and the squarem method provided in the SQUAREM package). The FixedPoint package is then introduced with applications showing how it can be used to find fixed points. In total five problems are described which show how fixed point accelerators can be used in solving problems in asset pricing, machine learning and economics. Here the intent is not only to showcase the capabilities of FixedPoint and SQUAREM but also to demonstrate how various problems may be recast in an iterative way in order to exploit fixed point accelerators. Finally this paper uses the presented numerical problems to perform a speed of convergence test on all of the algorithms presented in this paper.
A fixed point problem is one where we look for a vector, $\hat{X} \in \Re^N$, so that for a given function $f: \Re^N \rightarrow \Re^N$ we have $f(\hat{X}) = \hat{X}$.
Much of the intuition behind the use of optimisers and rootfinders carries over to the use of fixed point acceleration algorithms. Just as a function may have multiple roots and multiple local optima, a function may have multiple fixed points. The extreme case of this is the identity mapping $f(x) = x$, for which every point $x$ is a fixed point.
The first algorithm implemented in FixedPoint is the "simple" method which merely takes the output of a function and feeds it back into the function. For instance starting with a guess of $x_0$, the next guess will be $x_1 = f(x_0)$, the guess after that will be $x_2 = f(x_1)$ and so on.
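A minimal sketch of this scheme is below (a hypothetical helper of our own; the package's implementation adds convergence checks, error handling and the other acceleration methods):

SimpleIterate = function(f, guess, tol = 1e-10, max_iter = 1000){
  for (i in 1:max_iter){
    new_guess = f(guess)
    if (max(abs(new_guess - guess)) < tol){return(new_guess)}
    guess = new_guess
  }
  return(guess)
}
# For example, iterating cos(x) from a guess of 1 approaches the fixed
# point of cos at approximately 0.739.
SimpleIterate(cos, 1)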
Here we will define $g(x) = f(x) - x$. A fixed point of $f$ is then a root of $g$, which can be sought with the Newton method:

$$x_{i+1} = x_i - \frac{g(x_i)}{g^{\prime}(x_i)} \qquad (1)$$

where the derivative is approximated by finite differences between successive iterates, $g^{\prime}(x_i) \approx \frac{g(x_i) - g(x_{i-1})}{x_i - x_{i-1}}$.
The implementation of the Newton method in FixedPoint uses this formula to predict the fixed point given two previous function iterates. This method is designed for use with scalar functions. If it is used with higher dimensional functions that take and return vectors then it will be used elementwise.
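A sketch of a single Newton step is below (a hypothetical helper of our own, not the package's code); note that the arithmetic is elementwise, which is what allows the method to be applied elementwise to vector-valued functions:

NewtonGuess = function(x_prev, f_prev, x_curr, f_curr){
  g_prev = f_prev - x_prev                     # g(x) = f(x) - x
  g_curr = f_curr - x_curr
  g_dash = (g_curr - g_prev)/(x_curr - x_prev) # finite-difference derivative
  return(x_curr - g_curr/g_dash)               # Newton step towards g(x) = 0
}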
Consider a sequence of scalars $\{x_i\}_{i=0}^{\infty}$ that converges linearly to its fixed point $\hat{x}$. This implies that, for large $i$, the ratio of successive errors is approximately constant: $\frac{\hat{x} - x_{i+1}}{\hat{x} - x_i} \approx \frac{\hat{x} - x_{i+2}}{\hat{x} - x_{i+1}}$. Rearranging and solving for $\hat{x}$ yields the Aitken extrapolation:

$$\hat{x} \approx x_i - \frac{(x_{i+1} - x_i)^2}{x_{i+2} - 2x_{i+1} + x_i} \qquad (2)$$
The implementation of the Aitken method in FixedPoint uses this formula to predict the fixed point given two previous iterates. This method is designed for use with scalar functions. If it is used with higher dimensional functions that take and return vectors then it will be used elementwise.
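A corresponding sketch for Aitken extrapolation from three successive iterates (again a hypothetical helper; it applies elementwise to vectors):

AitkenGuess = function(x0, x1, x2){
  return(x0 - (x1 - x0)^2/(x2 - 2*x1 + x0))
}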
The epsilon algorithms introduced by Wynn (1962) provide an alternate method to extrapolate to a fixed point. This paper will present a brief numerical example and refer readers to Wynn (1962) or Smith et al. (1987) for a mathematical explanation of why it works. The basic epsilon algorithm starts with a column of simple function iterates. If $i$ iterates have been taken then this column will have a length of $i+1$ (the initial guess plus the results of the $i$ iterates). A series of additional columns is then generated by the recursion

$$\epsilon_{c+1}^{r} = \epsilon_{c-1}^{r+1} + \left(\epsilon_{c}^{r+1} - \epsilon_{c}^{r}\right)^{-1} \qquad (4)$$

where $c$ is a column index, $r$ is a row index, the column of simple iterates is $\epsilon_{0}$ and an initial column of zeros, $\epsilon_{-1}$, is used to start the recursion.
This can be seen in figure 1, which uses an epsilon method to extrapolate towards the fixed point of a scalar function from a column of simple iterates.
Note that the values in columns C and E are poor extrapolations. Only the even columns D,F provide reasonable extrapolation values. For this reason an even number of iterates (an odd number of values including the starting guess) should be used for extrapolation. FixedPoint will enforce this by throwing away the first iterate provided if necessary to get an even number of iterates.
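A sketch of the construction of such a table is below (our own illustration of the standard Wynn recursion for scalar iterates; the package's internals may differ). The first column of zeros starts the recursion, and the useful extrapolations sit in every second extrapolation column:

EpsilonTable = function(x){ # x is the vector of simple iterates
  n = length(x)
  eps = matrix(NA_real_, n, n + 1)
  eps[, 1] = 0 # the column of zeros that starts the recursion
  eps[, 2] = x # the column of simple function iterates
  for (k in 3:(n + 1)){
    rows = 1:(n - k + 2)
    eps[rows, k] = eps[rows + 1, k - 2] + 1/(eps[rows + 1, k - 1] - eps[rows, k - 1])
  }
  return(eps)
}
# Seven values (an initial guess of 1 and six iterates of cos):
iterates = 1
for (i in 1:6){iterates = c(iterates, cos(iterates[length(iterates)]))}
EpsilonTable(iterates)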
In the vector case this algorithm can be visualised by considering each entry in the above table to contain a vector going into the page. In this case the complication emerges from the inverse term in equation (4): there is no standard inverse for the vector $\epsilon_{c}^{r+1} - \epsilon_{c}^{r}$. The Scalar Epsilon Algorithm (SEA) sidesteps this by applying equation (4) elementwise, while the Vector Epsilon Algorithm (VEA) uses the Samelson inverse of a vector $\epsilon$, given by $\epsilon^{-1} = \frac{\epsilon}{\epsilon^{\prime}\epsilon}$.
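The Samelson inverse is a one-line computation, sketched here for completeness:

SamelsonInverse = function(eps){eps/sum(eps^2)} # eps divided by eps' eps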
FixedPoint implements two minimal polynomial algorithms, Minimal
Polynomial Extrapolation (MPE) and Reduced Rank Extrapolation (RRE). The
key intuition for these methods is that a linear combination of previous
iterates is taken to generate a new guess vector. The coefficients of
the previous iterates are taken so that this new guess vector is
expected to not be changed much by the function.
To first define notation, each vector (the initial guess and subsequent iterates) is denoted by $x_0, x_1, \ldots$ and the first differences are denoted by $u_j = x_{j+1} - x_j$. These differences are gathered into the matrix $U = [u_0, u_1, \ldots, u_{k-1}]$.

For the MPE method the extrapolated vector, $s$, is found by:

$$s = \frac{\sum_{j=0}^{k} c_j x_j}{\sum_{j=0}^{k} c_j}$$

where the coefficient vector $c$ is determined by setting $c_k = 1$ and computing the remaining coefficients as $c_{0:k-1} = -U^{+} u_k$, with $U^{+}$ denoting the Moore-Penrose generalised inverse of $U$.
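As a sketch of this calculation (our own illustration using MASS::ginv for the generalised inverse; the package's internal implementation may differ):

library(MASS)
# MPEGuess is a hypothetical helper: X is an N x (k+2) matrix whose columns
# are the iterates x_0, ..., x_{k+1}.
MPEGuess = function(X){
  U  = X[, -1, drop = FALSE] - X[, -ncol(X), drop = FALSE] # differences u_0,...,u_k
  k  = ncol(U) - 1
  c_ = c(-ginv(U[, 1:k, drop = FALSE]) %*% U[, k + 1], 1)  # coefficients with c_k = 1
  as.vector(X[, 1:(k + 1), drop = FALSE] %*% c_) / sum(c_) # weighted combination
}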
The only effective fixed point accelerator that was available in R prior to the release of the FixedPoint package was the squarem method provided in the SQUAREM package. This method modifies the minimal polynomial algorithms with higher order terms and can thus be considered as a variant of the MPE algorithms. The squarem method is primarily intended to accelerate convergence in the solution of expectation maximisation problems but can be used more generally with any function that is a contraction mapping (Varadhan and Roland 2008).
Anderson (1965) acceleration is an acceleration algorithm that is well
suited to functions of vectors. Similarly to the minimal polynomial
algorithms it takes a weighted average of previous iterates. It is
different, however, from all of the previous algorithms in that the iterates used to generate a guess vector need not be sequential; any previous iterates can be used. Thus it is well suited to parallelising
the finding of a fixed point.
Consider that we have previously run an $N$-dimensional function $M$ times. We can define a matrix $G_i = [g_{i-M}, g_{i-M+1}, \ldots, g_i]$ where each column is the residual of a previous iterate, $g_j = f(x_j) - x_j$. In Anderson acceleration we assign a weight to apply to each column of this matrix. The weight vector, $\alpha = \{\alpha_0, \alpha_1, \ldots, \alpha_M\}$, is chosen to solve the constrained least squares problem

$$\min_{\alpha} \Vert G_i \alpha \Vert_2 \quad \text{subject to} \quad \sum_{j=0}^{M} \alpha_j = 1.$$

With these weights we can then create the expression for the next iterate as:

$$x_{i+1} = \sum_{j=0}^{M} \alpha_j f(x_{i-M+j})$$
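As a sketch (our own illustration, not the package's internals), the constrained least squares problem above has a closed-form solution proportional to $(G^{\prime}G)^{-1}\mathbf{1}$, assuming $G^{\prime}G$ is invertible:

# G is a matrix whose columns are the residuals g(x_j) = f(x_j) - x_j of
# previous iterates. Returns weights that sum to one.
AndersonWeights = function(G){
  z = solve(t(G) %*% G, rep(1, ncol(G)))
  return(z/sum(z)) # normalise so the weights sum to one
}
# The next guess then combines the function outputs of the previous iterates:
# x_new = F_outputs %*% AndersonWeights(G)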
Some functions have a restricted input space. Acceleration schemes can perform badly in these settings by proposing vectors that sit outside of the required input space. As an example consider the following function:
library(FixedPoint)
SimpleVectorFunction = function(x){c(0.5*sqrt(x[1] + x[2]), abs(1.5*x[1] + 0.5*x[2]))}
FPSolution = FixedPoint(Function = SimpleVectorFunction, Inputs = c(0.3,900),
Method = "Anderson")
Unfortunately an error will occur here. After four iterates the Anderson method decides to try a vector whose two entries sum to a negative number. Taking the square root of this negative sum produces a NaN and an error results.
In cases like this there are a few things a user can try. The first is to change the function to another function that retains the same fixed points. In the above case we could change the function to take the absolute value of the sum of the two inputs before taking the square root. After finding a fixed point we can then verify whether the sum of the two entries is positive, and hence whether it is also a solution to the original function. Another measure that could be tried is to change the initial guess. Finally we could change the acceleration method. The simple method will be robust in this case as the function will never return an output vector that sums to a negative number. It is still likely to be slow, however.
# We run a few iterates of the robust simple method to get closer to the
# fixed point before handing over to the Anderson method again.
FPSolution = FixedPoint(Function = SimpleVectorFunction, Inputs = FPSolution$Inputs,
Outputs = FPSolution$Outputs, Method = "Simple", MaxIter = 5)
# Now we switch to the Anderson Method again. No error results because we are
# close to fixed point.
FPSolution = FixedPoint(Function = SimpleVectorFunction, Inputs = FPSolution$Inputs,
Outputs = FPSolution$Outputs, Method = "Anderson")
Another example of a restricted input space is shown in the consumption smoothing example presented later in this paper. In this example the input vector must represent a monotonic and concave function. All of the vectorised methods presented in this paper take a combination of previous iterates, each of which is a vector representing a monotonic and concave function. As a result these methods will only propose vectors representing monotonic and concave functions. By contrast the Newton, SEA and Aitken methods do not take into account the entire vector when proposing the fixed point value for each element of the vector, and as a result some of the input vectors proposed by these methods may not be valid. Ensuring that a vectorised method is chosen is thus sufficient in this case to ensure that each vector tried is within the input space of the function for which a fixed point is sought.
Most fixed point acceleration algorithms will fail in finding the fixed point of a function that converges by a fixed increment. For instance we may have a function that takes $x$ and returns $x$ shifted a constant distance (say 0.2 units) towards its fixed point, regardless of how far away $x$ currently is.
This type of convergence is problematic for all of the algorithms presented except for the simple method. The basic problem can be illustrated with the Newton and Aitken methods. For the Newton method consider the derivative in equation (1), which is approximated by $\frac{g(x_i) - g(x_{i-1})}{x_i - x_{i-1}}$. When the function converges by constant increments, $g(x_i) = g(x_{i-1})$ and this approximated derivative is zero, so the Newton method proposes a step of infinite length. The Aitken method suffers analogously: the denominator of equation (2), $x_{i+2} - 2x_{i+1} + x_i$, is also zero when the increments are constant.
More generally, when there is convergence by constant increments then the fixed point method receives information about what direction to go in but no information about how far to go. This is a complication that is common to all fixed point acceleration methods. In these cases it may be possible to change the function to make it converge by varying increments while retaining the same set of fixed points. An example of this is shown in the perceptron example presented later in this paper. In other cases where it is not possible to modify the function, it is advisable to use the simple method.
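To illustrate (with a hypothetical function of our own construction, not one from the package), consider a function that moves towards its fixed point of 4 in constant increments of 0.2:

# This function converges by a constant increment of 0.2 (snapping to 4 once
# within a single step of it). The residual g(x) = f(x) - x equals 0.2 for
# every x well below 4, so the finite-difference derivative in the Newton
# method is zero and the denominator of the Aitken formula is also zero.
ConstantIncrement = function(x){
  if (abs(4 - x) <= 0.2){return(4)}
  x + 0.2*sign(4 - x)
}
# The simple method still converges, one increment at a time:
FP = FixedPoint(Function = ConstantIncrement, Inputs = 1, Method = "Simple")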
For the simplest possible example we will use the FixedPoint package to accelerate the solution for a square root. Consider that we want to estimate a square root using the Babylonian method. To find the square root of a number $x$, given an initial guess $t_0$, the following sequence converges to the square root:

$$t_{n+1} = \frac{1}{2}\left(t_n + \frac{x}{t_n}\right)$$

The code below uses the SEA method to accelerate this sequence for $x = 100$.
library(FixedPoint)
SequenceFunction = function(tn){0.5*(tn + 100/tn)}
FP = FixedPoint(Function = SequenceFunction, Inputs = 6, Method = "SEA")
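The returned list stores each iterate as a column of the Inputs and Outputs matrices (the same names are used later in this paper), so the solution and the number of iterates used can be inspected with:

FP$FixedPoint      # 10, the square root of 100
dim(FP$Outputs)[2] # the number of iterates that were required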
The benefit of fixed point accelerators is more apparent when applied to vectorised functions. For a simple example consider the below function where each element of the returned vector depends on both elements of the input vector:
Vec_Function = function(x){c(0.5*sqrt(abs(x[1] + x[2])), 1.5*x[1] + 0.5*x[2])}
FP_Simple = FixedPoint(Function = Vec_Function, Inputs = c(0.3,900),
Method = "Simple")
FP_Anderson = FixedPoint(Function = Vec_Function, Inputs = c(0.3,900),
Method = "Anderson")
Here it takes 105 iterates to find a fixed point with the simple method but only 14 with the Anderson acceleration method.
For a more complex example consider that we want to model the diffusion of gas in a two dimensional space. We set up a two dimensional grid split into $\phi$ divisions along each side, so there are $\phi^2$ grid squares in total. In each period the gas in every square diffuses, so that a square's concentration moves towards the average of itself and its neighbouring squares. The concentration is fixed at zero in one corner square of the grid and at one in the opposite corner square, representing a vacuum and a source of the gas. The steady state concentration profile is then the fixed point of the function below, which takes a vector of concentrations and returns the concentrations after one period of diffusion.
phi = 10
Numbering = matrix(seq(1,phi^2,1), phi) # Numbering scheme for squares
NeighbourSquares = function(n,phi){
SurroundingIndexes = c(n)
if (n %% phi != 1){SurroundingIndexes = c(SurroundingIndexes, n-1)} # above
if (n %% phi != 0){SurroundingIndexes = c(SurroundingIndexes, n+1)} # below
if (n > phi){SurroundingIndexes = c(SurroundingIndexes, n-phi)} # left
if (n <= phi^2-phi){SurroundingIndexes = c(SurroundingIndexes, n+phi)} # right
return(SurroundingIndexes)
}
TwoDimensionalDiffusionIteration = function(x, phi){
xnew = x
for (i in 1:(phi^2)){
Subset = NeighbourSquares(i, phi)
xnew[i] = mean(x[Subset])
}
xnew[1] = 0 # the concentration is fixed at zero in the first corner square
xnew[phi^2] = 1 # and fixed at one in the opposite corner square
return(xnew)
}
FP = FixedPoint(Function = function(x) TwoDimensionalDiffusionIteration(x,phi),
Inputs = c(rep(0,50), rep(1,50)), Method = "RRE")
The fixed point found here can then be used to plot the density of oxygen over the space. The code for this is below while the plot can be found in figure 2.
x = 1:phi
y = 1:phi
oxygen_densities = matrix(FP$FixedPoint, phi)
persp(x, y, oxygen_densities)
Consider now that we are modeling a pure exchange economy and want to determine the equilibrium prices given household preferences and endowments. We have $N$ households and $G$ goods. Household $n$ is endowed with $e_{g,n}$ units of good $g$ and has log utility over consumption with preference weights $\gamma_{g,n}$, so that its utility is $\sum_{g=1}^{G} \gamma_{g,n} \log(c_{g,n})$, where $c_{g,n}$ is its consumption of good $g$. Letting $\lambda_n$ denote the Lagrange multiplier on household $n$'s budget constraint, the first order conditions imply

$$\lambda_n = \frac{\sum_{g=1}^{G} \gamma_{g,n}}{\sum_{g=1}^{G} p_g e_{g,n}} \qquad (16)$$

together with the demand function $c_{g,n} = \frac{\gamma_{g,n}}{p_g \lambda_n}$ (17). Imposing market clearing for each good, $\sum_{n=1}^{N} c_{g,n} = \sum_{n=1}^{N} e_{g,n}$, then gives prices in terms of the $\lambda$s:

$$p_g = \frac{\sum_{n=1}^{N} \gamma_{g,n} / \lambda_n}{\sum_{n=1}^{N} e_{g,n}} \qquad (18)$$

Iterating between expressions (16) and (18), with prices normalised so that the first good is the numeraire, gives a function whose fixed point is the equilibrium price vector.
We use this approach in the code below for the case of 10 goods and 8 households. For exposition's sake we generate some data below before proceeding to find the equilibrium price vector.
# Generating data
set.seed(3112)
N = 8
G = 10
Endowments = matrix(rlnorm(N*G), nrow = G)
Gamma = matrix(runif(N*G), nrow = G)
# Every column here represents a household and every row is a good.
# So Endowments[1,2] is the second household's endowment of good 1.
# We now start solving for equilibrium prices:
TotalEndowmentsPerGood = apply(Endowments, 1, sum)
TotalGammasPerHousehold = apply(Gamma, 2, sum)
LambdasGivenPriceVector = function(Price){
ValueOfEndowmentsPerHousehold = Price * Endowments
TotalValueOfEndowmentsPerHousehold = apply(ValueOfEndowmentsPerHousehold, 2, sum)
return(TotalGammasPerHousehold /TotalValueOfEndowmentsPerHousehold)
}
IterateOnce = function(Price){
Lambdas = LambdasGivenPriceVector(Price) # eqn 16
GammaOverLambdas = t(apply(Gamma, 1, function(x) x / Lambdas))
SumGammaOverLambdas = apply(GammaOverLambdas,1,sum)
NewPrices = SumGammaOverLambdas/ TotalEndowmentsPerGood # eqn 18
NewPrices = NewPrices/NewPrices[1] # normalising with numeraire
return(NewPrices)
}
InitialGuess = rep(1,10)
FP = FixedPoint(Function = IterateOnce, Inputs = InitialGuess, Method = "VEA")
The fixed point contained in the FP object is the vector of equilibrium prices.
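As a quick check (this snippet is our own addition, not from the original code), a genuine fixed point should be left essentially unchanged by one further application of the iteration function:

max(abs(IterateOnce(FP$FixedPoint) - FP$FixedPoint)) # should be near zero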
The perceptron is one of the oldest and simplest machine learning algorithms (Rosenblatt 1958). In its simplest form, for each observation it is applied to, it uses an $N$-dimensional vector of features $x$ together with $N+1$ weights $\mathbf{w}$ to classify the observation as being in category one or category zero. It classifies observation $j$ as a type one if $w_0 + \sum_{i=1}^{N} w_i x_{i,j} > 0$ and as a type zero otherwise.

The innovation of the perceptron was its method for training its weights, $\mathbf{w}$. This is done by looping over a set of observations that can be used for training (the "training set") and, for each observation, updating the weights according to

$$w_i \leftarrow w_i + \eta \, \frac{1}{2}(d_j - y_j) \, x_{i,j}$$

where $w_i$ is the weight attached to feature $i$, $d_j \in \{-1, 1\}$ is the true category of observation $j$, $y_j \in \{-1, 1\}$ is the predicted category, $x_{i,j}$ is the value of feature $i$ for observation $j$ (with $x_{0,j} = 1$ for the intercept weight) and $\eta$ is the learning rate.
This training algorithm can be rewritten as a fixed point problem. We can
write a function that takes perceptron weights, loops over the data
updating these weights and then returns the updated weight vector. If
the perceptron classifies every observation correctly then the weights
will not update and we are at a fixed point.
Most acceleration algorithms perform poorly in accelerating the convergence of this perceptron training algorithm. This is due to the perceptron often converging by a fixed increment. This occurs because multiple iterates can result in the same observations being misclassified and hence the same change in the weights. As a result we will use the simple method which is guaranteed to be convergent for this problem (Novikoff 1963).
# Generating linearly separable data
set.seed(10)
data = data.frame(x1 = rnorm(100,4,2), x2 = rnorm(100,8,2), y = -1)
data = rbind(data,data.frame(x1 = rnorm(100,-4,2), x2 = rnorm(100,12), y = 1))
# Iterating training of Perceptron
IteratePerceptronWeights = function(w, LearningRate = 1){
intSeq = 1:length(data[,"y"])
for (i in intSeq){
target = data[i,c("y")]
score = w[1] + (w[2]*data[i, "x1"]) + (w[3]*data[i, "x2"])
ypred = 2*(as.numeric( score > 0 )-0.5)
update = LearningRate * 0.5*(target-ypred)
w[1] = w[1] + update
w[2] = w[2] + update*data[i, "x1"]
w[3] = w[3] + update*data[i, "x2"]
}
return(w)
}
InitialGuess = c(1,1,1)
FP = FixedPoint(Function = IteratePerceptronWeights, Inputs = InitialGuess,
Method = "Simple", MaxIter = 1200)
The result of this algorithm can be seen in figure 3. It can be seen that the classification line perfectly separates the two groups of observations.
Only the simple method is convergent here and it is relatively slow, taking 1121 iterations. We can still get a benefit from accelerators however if we can modify the training algorithm to give training increments that change depending on distance from the fixed point. This can be done by updating the weights, whenever an observation is misclassified, by an amount proportional to a concave function of the magnitude of the score, $w_0 + \sum_{i=1}^{N} w_i x_{i,j}$. Note that correctly classified observations still result in no weight update, so the modified training function below has the same set of fixed points as the original.
IteratePerceptronWeights = function(w, LearningRate = 1){
intSeq = 1:length(data[,"y"])
for (i in intSeq){
target = data[i,c("y")]
score = w[1] + (w[2]*data[i, "x1"]) + (w[3]*data[i, "x2"])
ypred = 2*(as.numeric( score > 0 )-0.5)
if ((target-ypred) != 0){
update = LearningRate * -sign(score) * sqrt(abs(score))
w[1] = w[1] + update
w[2] = w[2] + update*data[i, "x1"]
w[3] = w[3] + update*data[i, "x2"]
}
}
return(w)
}
FP = FixedPoint(Function = IteratePerceptronWeights, Inputs = InitialGuess,
Method = "MPE")
For an application in finance consider the pricing of a perpetual American put option on a stock. It never expires unless it is exercised. Its value goes to zero however if the spot price rises to become $\alpha$ times as much as the strike price, $S$ (this is reasonable as an option that is sufficiently far out of the money is unlikely to ever be exercised). We model the underlying with a binomial tree: in each period the spot price increases by a factor of $e^{\sigma}$ with (risk neutral) probability $p$ and decreases by a factor of $e^{-\sigma}$ with probability $1-p$. Given the risk neutral pricing principle the expected return from holding the stock must equal the risk-free rate, $d$. Hence we must have

$$p e^{\sigma} + (1-p) e^{-\sigma} = e^{d} \quad \Longrightarrow \quad p = \frac{e^{d} - e^{-\sigma}}{e^{\sigma} - e^{-\sigma}}.$$
d = 0.05
sigma = 0.1
alpha = 2
S = 10
chi = 0
p = (exp(d) - exp(-sigma) ) / (exp(sigma) - exp(-sigma))
# Initially we guess that the option value decreases linearly from S
# (when the spot price is 0) to 0 (when the spot price is alpha*S).
UnderlyingPrices = seq(0,alpha*S, length.out = 100)
OptionPrice = seq(S,chi, length.out = 100)
ValueOfExercise = function(spot){S-spot}
ValueOfHolding = function(spot, EstimatedValueOfOption){
if (spot > alpha*S-1e-10){return(chi)}
IncreasePrice = exp(sigma)*spot
DecreasePrice = exp(-sigma)*spot
return((p*EstimatedValueOfOption(IncreasePrice) +
(1-p)*EstimatedValueOfOption(DecreasePrice)))
}
ValueOfOption = function(spot, EstimatedValueOfOption){
Holding = ValueOfHolding(spot, EstimatedValueOfOption)*exp(-d)
Exercise = ValueOfExercise(spot)
return(max(Holding, Exercise))
}
IterateOnce = function(OptionPrice){
EstimatedValueOfOption = approxfun(UnderlyingPrices, OptionPrice, rule = 2)
for (i in 1:length(OptionPrice)){
OptionPrice[i] = ValueOfOption(UnderlyingPrices[i], EstimatedValueOfOption)
}
return(OptionPrice)
}
library(SQUAREM)
FP = squarem(par=OptionPrice, IterateOnce)
Here the fixed point gives the price of the option at any given level of the underlying asset’s spot price. This can be visualized as seen in figure 4.
plot(UnderlyingPrices, FP$par, type = "l",
xlab = "Price of Underlying", ylab = "Price of Option")
A common feature of macroeconomic models is the simulation of consumer spending patterns over time. These computations are not trivial: in order for a consumer to make a rational spending decision they need to know their future wellbeing as a function of their future wealth. Often models exhibit infinitely lived consumers without persistent shocks, and in this setting the relationship between wealth and wellbeing can be found with a fixed point algorithm. Consider an infinitely lived consumer that has a budget of $B_t$ at time $t$ and receives an income of one unit in every period. She has a periodic utility function of $\epsilon_t x_t^{\delta}$, where $x_t$ is the amount spent in period $t$ and $\epsilon_t$ is a multiplicative shock to the value of consumption drawn from a lognormal distribution. Her value function then satisfies the Bellman equation

$$V(B_t \mid \epsilon_t) = \max_{0 < x_t < B_t} \; \epsilon_t x_t^{\delta} + \beta \, E_t\left[V(B_{t+1})\right], \qquad B_{t+1} = B_t - x_t + 1,$$

where $\beta$ is the discount factor. The expected value function is the fixed point of the mapping implemented below, which takes a vector of values at each point of a budget state space and returns the updated vector after one application of the Bellman equation.
library(FixedPoint)
library(schumaker)
library(cubature)
delta = 0.2
beta = 0.99
BudgetStateSpace = c(seq(0,1, 0.015), seq(1.05,3,0.05))
InitialGuess = sqrt(BudgetStateSpace)
ValueGivenShock = function(Budget, epsilon, NextValueFunction){
optimize(f = function(x) epsilon*(x^delta) + beta*NextValueFunction(Budget - x + 1),
lower = 0, upper = Budget, maximum = TRUE)
}
ExpectedUtility = function(Budget, NextValueFunction){
if (Budget > 0.001){
adaptIntegrate(f = function(epsilon) ValueGivenShock(Budget,
epsilon,NextValueFunction)$objective * dlnorm(epsilon),
lowerLimit = qlnorm(0.0001), upperLimit = qlnorm(0.9999))$integral
} else {
beta*NextValueFunction(1)
}
}
IterateOnce = function(BudgetValues){
NextValueFunction = schumaker::Schumaker(BudgetStateSpace, BudgetValues,
Extrapolation = "Linear")$Spline
for (i in 1:length(BudgetStateSpace)){ # This is often a good loop to parallelise
BudgetValues[i] = ExpectedUtility(BudgetStateSpace[i], NextValueFunction)
}
return(BudgetValues)
}
FP = FixedPoint(Function = IterateOnce, Inputs = InitialGuess,
Method = "Anderson")
This takes 71 iterates which is drastically better than the 2316 iterates it takes with the simple method. Now the optimal spending amount can be found for any given budget and any income shock. For instance with the following code we can work out what a consumer with a budget of 1.5 and a shock of 1.2 would spend:
NextValueFunction = Schumaker(BudgetStateSpace, FP$FixedPoint)$Spline
ValueGivenShock(1.5, 1.2, NextValueFunction)$maximum
It takes 71 iterates for the Anderson method to find the fixed point, however we might want to get it to go even faster through parallelisation. The easiest way to do this for this particular problem is to parallelise the for loop through the budget state space. For exposition however we show how to do this by doing multiple iterates at the same time. We will do this by using six cores and the parallel capabilities of the foreach and doParallel packages (Revolution Analytics and Weston 2015; Microsoft Corporation and Weston 2017). Each node will produce a different guess vector through the Anderson method. This is achieved by giving each node a different subset of the previous iterates that have been completed. The first node will have all previous iterate information; for $i > 1$, node $i$ will use all previous iterates except the $i$th most recent one. The code implementing this scheme is presented at the end of this paper.
This parallel method takes 102 iterates when using six cores, which takes approximately the same time as running $102/6 = 17$ iterates sequentially. This is a significant improvement over the 71 iterates required by the serial Anderson method.
All of the algorithms of the FixedPoint package as well as the squarem algorithm of the SQUAREM package were run for a variety of problems. In addition to all of the above problems, fixed points were found for some basic analytical functions such as $\cos(x)$. The test cases are summarised in the first table below, and the second table reports the number of iterates each method needed to reach a fixed point (a * indicates that the method did not converge for that case).
Case | Dimensions | Function |
---|---|---|
1 | 1 | Babylonian Square Root |
2 | 1 | cos(x) |
3 | 6 | |
4 | 6 | |
5 | 2 | Simple Vector Function |
6 | 100 | Gas Diffusion |
7 | 3 | Perceptron |
8 | 3 | Modified Perceptron |
9 | 10 | Equilibrium Prices |
10 | 100 | Perpetual American Put |
11 | 107 | Consumption Smoothing |
Case | Simple | Anderson | Aitken | Newton | VEA | SEA | MPE | RRE | squarem |
---|---|---|---|---|---|---|---|---|---|
1 | 6 | 7 | 7 | 7 | 6 | 6 | 6 | 6 | 6 |
2 | 58 | 7 | 11 | 9 | 13 | 13 | 19 | 25 | 55 |
3 | 22 | 12 | 9 | 9 | 13 | 13 | 9 | 10 | 12 |
4 | * | 5 | 3 | 3 | 25 | * | 19 | 7 | * |
5 | 105 | 14 | 67 | 239 | 20 | 25 | 31 | 31 | 44 |
6 | 221 | 26 | 323 | * | 150 | 221 | 44 | 50 | 159 |
7 | 1121 | * | * | * | * | * | * | * | * |
8 | 1156 | * | * | 20 | 75 | 158 | 54 | 129 | 638 |
9 | 11 | 9 | 14 | 24 | 11 | 11 | 11 | 12 | 11 |
10 | 103 | 37 | 203 | * | 108 | 103 | 43 | 52 | 103 |
11 | 2316 | 71 | * | * | * | * | 217 | 159 | 285 |
It can be seen that the Anderson algorithm performed well in almost all cases. The minimal polynomial methods tended to outperform the epsilon extrapolation methods, which is largely in agreement with previous benchmarking performed in Jbilou and Sadok (2000). The MPE method generally outperformed the RRE method, and the VEA outperformed the SEA in all cases. The squarem method tended to be outperformed by the standard minimal polynomial methods. While it was generally amongst the slowest methods, the simple method was the most generally applicable, converging in all but one of the test cases studied.
R has had available a multitude of algorithms for rootfinding and multidimensional optimisation for a long time. Until recently however the range of fixed point accelerators available in R has been limited. Before the release of FixedPoint, only the squarem method of the SQUAREM package was available as a general use fixed point accelerator.
This paper examines the use of fixed point accelerators in R. The algorithms of the FixedPoint and SQUAREM packages are used to demonstrate the use of fixed point acceleration algorithms in the solution of numerical mathematics problems. A number of applications were shown. First the package was used to accelerate the finding of an equilibrium distribution of gas in a diffusion setting. The package was then used to accelerate the training of a perceptron classifier. The acceleration of this training was complicated by the training function converging in fixed increments; however, it was possible to speed up the solution with a fixed point accelerator by changing the training algorithm while retaining the same set of fixed points. A number of problems in economics were then examined. First the equilibrium price vector was found for a pure exchange economy. Next a vector was found that gives the price of a perpetual American put option at various values of the underlying asset's spot price. Finally the future value function was found for an infinitely lived consumer facing a consumption smoothing problem.
In all of these example applications it can be noted that the solving for a fixed point was accelerated significantly by the use of a fixed point acceleration algorithm. In many cases an accelerator was available that was more than an order of magnitude faster than the simple method. The results indicate that large speedups are available to R programmers that are able to apply fixed point acceleration algorithms to their numerical problem of interest.
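The code below implements the parallelised Anderson scheme described in the consumption smoothing section.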
library(foreach)
library(doParallel)
cores = 6
NodeTaskAssigner = function(Inputs, Outputs, i, Function){
library(FixedPoint)
library(schumaker)
library(cubature)
Iterates = dim(Inputs)[2]
if (i > 1.5) {IterateToDrop = Iterates-i+1} else {IterateToDrop = 0}
IteratesToUse = (1:Iterates)[ 1:Iterates != IterateToDrop]
Inputs = matrix(Inputs[,IteratesToUse], ncol = length(IteratesToUse), byrow = FALSE)
Outputs = matrix(Outputs[,IteratesToUse], ncol = length(IteratesToUse), byrow = FALSE)
Guess = FixedPointNewInput(Inputs = Inputs, Outputs = Outputs, Method = "Anderson")
Outputs = matrix(Function(Guess), ncol = 1, byrow = FALSE)
Inputs = matrix(Guess, ncol = 1, byrow = FALSE)
return(list(Inputs = Inputs, Outputs = Outputs))
}
# This combines the results returned by each node
CombineLists = function(List1, List2){
width = dim(List1$Inputs)[2] + dim(List2$Inputs)[2]
C = list()
C$Inputs = matrix(c(List1$Inputs , List2$Inputs ), ncol = width, byrow = FALSE)
C$Outputs = matrix(c(List1$Outputs, List2$Outputs), ncol = width, byrow = FALSE)
return(C)
}
# ReSortIterations
# This function takes the previous inputs and outputs from the function, removes
# duplicates and then sorts them in order of increasing convergence.
ReSortIterations = function(PreviousIterates,
ConvergenceMetric = function(Resids){max(abs(Resids))})
{
# Removing any duplicates
NotDuplicated = (!(duplicated.matrix(PreviousIterates$Inputs, MARGIN = 2)))
PreviousIterates$Inputs = PreviousIterates$Inputs[,NotDuplicated]
PreviousIterates$Outputs = PreviousIterates$Outputs[,NotDuplicated]
# Resorting
Resid = PreviousIterates$Outputs - PreviousIterates$Inputs
Convergence = sapply(1:(dim(Resid)[2]), function(x) ConvergenceMetric(Resid[,x]))
Reordering = order(Convergence, decreasing = TRUE)
PreviousIterates$Inputs = PreviousIterates$Inputs[,Reordering]
PreviousIterates$Outputs = PreviousIterates$Outputs[,Reordering]
return(PreviousIterates)
}
ConvergenceMetric = function(Resid){max(abs(Resid))}
# Preparing for clustering and getting a few runs to input to later functions:
PreviousRuns = FixedPoint(Function = IterateOnce, Inputs = InitialGuess,
Method = "Anderson", MaxIter = cores)
PreviousRuns$Residuals = PreviousRuns$Outputs - PreviousRuns$Inputs
PreviousRuns$Convergence = apply(PreviousRuns$Residuals, 2, ConvergenceMetric)
ConvergenceVal = min(PreviousRuns$Convergence)
registerDoParallel(cores=cores)
iter = cores
while (iter < 100 & ConvergenceVal > 1e-10){
NewRuns = foreach(i = 1:cores, .combine=CombineLists) %dopar% {
NodeTaskAssigner(PreviousRuns$Inputs, PreviousRuns$Outputs, i, IterateOnce)
}
# Appending to previous runs
PreviousRuns$Inputs = matrix(c(PreviousRuns$Inputs, NewRuns$Inputs),
ncol = dim(PreviousRuns$Inputs)[2] + cores, byrow = FALSE)
PreviousRuns$Outputs = matrix(c(PreviousRuns$Outputs, NewRuns$Outputs),
ncol = dim(PreviousRuns$Outputs)[2] + cores, byrow = FALSE)
PreviousRuns = ReSortIterations(PreviousRuns)
PreviousRuns$Residuals = PreviousRuns$Outputs - PreviousRuns$Inputs
PreviousRuns$Convergence = apply(PreviousRuns$Residuals, 2, ConvergenceMetric)
# Finding Convergence
ConvergenceVal = min(PreviousRuns$Convergence)
iter = iter + cores
}
stopImplicitCluster()
# And the fixed point comes out to be:
PreviousRuns$Outputs[, dim(PreviousRuns$Outputs)[2]]