Using the Monte Carlo simulation
So, you’ve created a financial plan for your clients and – hopefully – everything looks great. But wait! The plan contains one very dangerous assumption: namely, that the clients' investment return will be the same in each and every year. Generating reasonable returns entails accepting risk, which means, by definition, that one cannot expect the same return year in, year out. Rather, there will be good and bad years, occurring in a completely unpredictable sequence. The real problem is that the exact sequence of good and bad years is critical.
In this context, the Monte Carlo simulation can be very useful for 'stress testing' a financial plan. The simulation ‘randomizes’ investment returns across the clients' lifetime and repeats the exercise many times, after which you will know how often (out of the total iterations) your strategy 'succeeded', in the sense that your clients didn’t run out of money at any point along the way.
Key Takeaway: In a context of uncertainty, the Monte Carlo simulation can be useful in assessing the likelihood that your clients will be able to achieve their lifetime financial objectives, without running out of money at any point along the way.
Asset Allocations Must be Used to Run the Simulation
As discussed elsewhere, investment and pension accounts (and savings accounts, if desired) can be grown using either simple 'fixed' growth rates or asset allocations (model portfolios). Only accounts to which an asset allocation is applied come into play when running the Monte Carlo simulation.
To run the simulation, a plan must have at least one investment or pension account using an asset allocation to determine the rate of investment growth. Why? Because the assumptions that underpin the asset allocation allow for a range of possible returns, in accordance with a 'probability distribution', which we’ll get to shortly. To reiterate, the assumed investment return for any given asset class will range along a spectrum, from the extremely good to the extremely poor.
For more detail on this latter point, please keep reading.
If a large proportion of the accounts have been assigned a pre-determined 'fixed growth' rate, the output from the Monte Carlo will tend towards either 0% or 100%, depending on whether the client’s goals can be met at this fixed growth rate. In using fixed growth rates, we nullify the element of chance that was the reason for using the Monte Carlo simulation in the first place.
Key Takeaway: The Monte Carlo is intended for use in situations where most (or all) of one’s investment and/or pension accounts are using an asset allocation.
Market Assumptions and the Monte Carlo Simulation
The Monte Carlo simulation generates 'stochastic' outputs, predicated upon the set of market assumptions being used. ‘Market assumptions’ refers to the set of asset class assumptions, as shown in Preferences (Adviser - Plan Preferences):
- The assumed 'mean' return, and standard deviation (potential volatility) values, for each asset class
- The correlations, which indicate how the different asset classes interact (are correlated) with each other.
Note that the UK releases of Voyant Adviser and Adviser Go arrive packaged with a set of default market assumptions provided by Rayner Spencer Mills Research (RSMR). Additional information on this market data set can be found here.
Other releases of Voyant for the US, Canada and Ireland are packaged with different sets of default market assumptions.
Market assumptions may also vary depending on whether your firm has opted to collect and use its own market data. Any subscriber is able to customize the market assumptions but, typically, bespoke market assumptions will be added to the software, and managed as part of Voyant's optional rebranding service.
As already suggested, the market assumptions in use can be viewed by opening the respective client record in Voyant Adviser:
- Navigate to the Preferences screen (via the small 'cog' icon, top-left)
- On the right-hand side (Plan Preferences), select the option labelled 'Market Assumptions'
Given the assumptions in use, the software constructs a 'normal' (i.e. bell curve) probability distribution for each asset class. This probability distribution defines the range of possible returns for each individual asset class, and the relative likelihood of those returns.
Perhaps the most fundamental property of the 'bell curve' distribution is captured in the so-called "68-95-99.7% rule". This states that approximately 68% of values (e.g. investment outcomes) are assumed to fall within one standard deviation either side of the mean, about 95% within two standard deviations, and about 99.7% within three standard deviations. In other words, a large majority of investment returns will be clustered fairly tightly around the mean value. The guide linked to here includes more detail on the 'bell curve' distribution.
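The 68-95-99.7% rule is easy to verify empirically. The sketch below (a minimal illustration, using a hypothetical asset class with an assumed 5% mean return and 10% standard deviation – these figures are not Voyant defaults) samples a large number of returns from a normal distribution and counts how many fall within one, two, and three standard deviations of the mean:

```python
import random

# Illustrative check of the 68-95-99.7% rule.
# Hypothetical asset class: assumed mean return 5%, standard deviation 10%.
random.seed(42)
mean, sd = 0.05, 0.10
samples = [random.gauss(mean, sd) for _ in range(100_000)]

def fraction_within(k):
    """Fraction of sampled returns within k standard deviations of the mean."""
    lo, hi = mean - k * sd, mean + k * sd
    return sum(lo <= r <= hi for r in samples) / len(samples)

for k, expected in [(1, 0.68), (2, 0.95), (3, 0.997)]:
    print(f"within {k} SD of the mean: {fraction_within(k):.3f} (rule says ~{expected})")
```

With enough samples, the empirical fractions settle very close to 68%, 95% and 99.7% – which is precisely why returns more than two or three standard deviations from the mean are treated as rare events.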
Key Takeaway: It is assumed that investment returns conform to a 'bell curve' distribution, meaning that returns are likely to be clustered around the ‘mean’, consistent with the "68-95-99.7% rule".
How Are Annual Returns Randomised?
Recall that a given set of market assumptions contains the names of the asset classes being used, together with the ‘mean’ and standard deviation values for each. The asset classes are more or less correlated with one another, as defined in the 'correlation matrix'. In running a single iteration, the simulation randomly selects a percentile value for each asset class, and this randomised percentile selection is then repeated for each year of the plan.
For example: if a given client's timeline runs for 50 years, and your choice of 'market assumptions' contains 10 individual asset classes, then – for a single iteration (i.e. across the lifetime of the plan) – the simulation will generate 500 (i.e. 50 x 10) random percentile values.
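The arithmetic of that example can be sketched as follows (an illustration of the counting only, not Voyant's actual engine):

```python
import random

# Illustrative sketch: one Monte Carlo iteration draws a random percentile
# for every asset class, in every year of the plan.
random.seed(1)
years, asset_classes = 50, 10  # the figures from the example above

# One iteration: a percentile (0-100) per asset class, per year.
iteration = [[random.uniform(0, 100) for _ in range(asset_classes)]
             for _ in range(years)]

total_draws = sum(len(year) for year in iteration)
print(total_draws)  # 50 years x 10 asset classes = 500 percentile draws
```

Each of those 500 percentile values is then mapped onto the relevant asset class's bell curve to produce a concrete rate of return for that year.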
The simulation takes the randomly generated percentile values and – given the assumed 'correlations' between the asset classes – applies an algorithm to ensure (so far as possible) that investment outcomes for each asset class are consistent with those correlations. This operation is called a 'Cholesky decomposition'. Some correlation matrices make it effectively impossible for the software to perform this operation, in which case the simulation remains completely random. This is not necessarily a problem, inasmuch as the correlations between asset classes represent only a 'tendency', not an 'iron law'. In other words, nothing prohibits even negatively-correlated asset classes from moving in the same direction over a given period of time.
Provided that it is possible to 'correlate' asset classes, the Monte Carlo simulation will adjust its randomly generated percentiles so as to take account of the relationship between them. For example, suppose that two asset classes are highly correlated (e.g. a value of 0.8, as defined within one’s market assumptions), and that the Monte Carlo simulation generates a random 90th-percentile return for asset class 1 and a 15th-percentile return for asset class 2. The Cholesky decomposition will then lead the software to adjust these two randomly-assigned percentiles to values that are more consistent (or more probable) given the high correlation value. The percentile for asset class 1 might be adjusted downwards, say to the 85th percentile, while the percentile for asset class 2 could be adjusted upwards, e.g. to the 60th percentile, thereby narrowing the difference between the two, based on the correlation value.
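The mechanics can be illustrated for the two-asset case. The sketch below is a textbook Cholesky construction, not Voyant's actual algorithm (in this simple form the first asset's percentile is left unchanged, whereas the example above notes that both may move): the independently drawn percentiles are converted to standard-normal scores, combined via the Cholesky factor of the 2x2 correlation matrix, and converted back to percentiles.

```python
import math
from statistics import NormalDist

# Illustrative two-asset Cholesky adjustment (not Voyant's actual engine).
nd = NormalDist()   # standard normal, used to map percentiles <-> z-scores
rho = 0.8           # assumed high correlation between the two asset classes

# Independent random draws, as in the example: 90th and 15th percentile.
p1, p2 = 0.90, 0.15
z1, z2 = nd.inv_cdf(p1), nd.inv_cdf(p2)

# Cholesky factor of [[1, rho], [rho, 1]] is [[1, 0], [rho, sqrt(1 - rho^2)]].
# Applying it mixes some of asset 1's draw into asset 2's.
z1_adj = z1
z2_adj = rho * z1 + math.sqrt(1 - rho ** 2) * z2

p1_adj, p2_adj = nd.cdf(z1_adj), nd.cdf(z2_adj)
print(f"asset 1: {p1:.0%} -> {p1_adj:.0%}")
print(f"asset 2: {p2:.0%} -> {p2_adj:.0%}")
```

Running this pulls asset 2's draw up from the 15th percentile to roughly the 66th, narrowing the gap between the two assets, exactly the 'weight behind the tendency' described above. A negative correlation would push the draws apart instead.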
Key Takeaway: The Monte Carlo simulation does not guarantee that 'highly correlated' asset classes will always move in the same direction (or that negatively-correlated asset classes will move in opposite directions) – but it puts weight behind the assumed tendency. The stronger the tendency, the greater the weight.
What happens when I run the simulation?
The upper chart of the simulation shows the simple deterministic cash flow, based on the default mean (50th percentile) return values, i.e. what the cash flow looks like before running the Monte Carlo.
The simulation runs for the lifetime of the plan, until the final mortality event, in accordance with one’s basic assumptions regarding lifespan. When you press the button labelled 'Run Simulation', the software starts running the iterations, and the bottom half of the split screen adjusts as more iterations are completed. On completion of the iterations, the lower chart presents:
(1) The probability of success in individual years… and…
(2) The overall 'Probability of Success', across the lifetime of the plan (with the latter being visible directly below the chart) – as illustrated below:
Q: Update Frequency – what is this?
A: The number of iterations run by the simulation before the screen refreshes.
Key Takeaway: The simulation will give an indication of years in which your clients are relatively more/less vulnerable (to the occurrence of a shortfall), as well as the overall probability of success.
What is a successful iteration?
The stated 'Probability of Success' – visible directly below the chart itself – indicates the proportion of completed iterations that did not give rise to an expense shortfall in any single year of the clients’ plan. In the course of a single iteration, the simulation randomly selects a percentile value from the spectrum of possible returns, and the software derives a rate of return for each stochastically-modelled (investment or pension) account. The simulation repeats this exercise for each and every year of the plan. An iteration in which no shortfall occurs, in any single year, is considered a success. This exercise is repeated for the entire number of iterations you have chosen to run.
A successful iteration indicates that the magnitude and sequence of investment returns, across the lifetime of the plan, were such that no shortfall occurred. The 'Probability of Success' may be taken as some indication of the plan’s robustness, and degree of vulnerability to short-term fluctuations, or to a run of negative returns.
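The success-counting logic can be sketched in simplified form. Everything below is hypothetical – a single blended portfolio, assumed figures for balance, spending, mean return and volatility – and deliberately ignores taxes, contributions and the multi-account, multi-asset detail of a real plan:

```python
import random

# Simplified sketch of the "Probability of Success" calculation.
# All figures are hypothetical; this is not Voyant's engine.
random.seed(7)
iterations, years = 1_000, 30
start_balance, annual_spend = 500_000, 30_000
mean, sd = 0.05, 0.12  # assumed portfolio mean return and volatility

successes = 0
for _ in range(iterations):
    balance = start_balance
    shortfall = False
    for _ in range(years):
        # Apply a randomised return, then fund the year's expenses.
        balance = balance * (1 + random.gauss(mean, sd)) - annual_spend
        if balance < 0:  # expenses could not be met this year
            shortfall = True
            break
    if not shortfall:
        successes += 1  # no shortfall in any year: a successful iteration

probability_of_success = successes / iterations
print(f"Probability of Success: {probability_of_success:.0%}")
```

Note that the same inputs can produce a failed iteration purely because the bad years arrive early – sequence-of-returns risk in action.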
The more iterations you run, the more likely it becomes that the plan will be tested against the possible extremes of performance and against extended runs of poor returns – and the more confidence one may be able to place in the eventual result. That said, the simulation can be no better than the assumptions it contains (e.g. that the volatility of an asset class will remain constant), and so its output should always be interpreted cautiously.
The Monte Carlo simulation is an exercise in probability. A plan that receives a 90% probability of success after one hundred iterations is probably more likely to succeed than one that returns a 10% probability of success. However, a plan that returns a 100% 'probability of success' could be a very safe plan – or the result could simply reflect the small number of iterations run, meaning the sample you’ve captured is not genuinely representative of the likely range of returns and/or the possible sequencing of those returns.
Key Takeaway: Only by running a sufficiently large number of iterations can the simulation provide statistically valid conclusions. A high probability of success, in conjunction with a large number of iterations, may give one reason to believe that the clients’ plan is robust, in terms of being able to achieve the agreed objectives.
Iterations – Does the number of iterations correlate to the accuracy of the simulation?
Accuracy is not a word we should use when it comes to Monte Carlo. But in terms of probability, the number of iterations will constrain the range of outcomes that are likely to manifest. The other way of putting this same point is that the more iterations one runs, the more likely that the plan will be tested against possible extremes of performance and extended runs of poor returns. As such, there is a correlation between the number of iterations, and the confidence one might place in the result. Voyant cannot tell you how many iterations you ought to run but, to draw statistically valid ('significant') conclusions, your sample size (the number of iterations) will need to be sufficiently large.
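One rough way to see why sample size matters: if the success rate is treated as a binomial proportion, its sampling error shrinks with the square root of the number of iterations. The sketch below (a statistical rule of thumb, not anything built into Voyant) shows the approximate 95% margin of error around an observed 90% success rate:

```python
import math

# Rule-of-thumb sketch: margin of error of an estimated success rate,
# modelled as a binomial proportion. Illustrative only.
p = 0.90  # an observed "Probability of Success", for illustration
margins = {}
for n in (100, 1_000, 10_000):
    se = math.sqrt(p * (1 - p) / n)   # standard error of the proportion
    margins[n] = 1.96 * se            # approximate 95% confidence margin
    print(f"{n:>6} iterations: ~+/-{margins[n]:.1%} at 95% confidence")
```

At 100 iterations the margin is roughly ±6 percentage points; at 10,000 it falls below ±1 point. Quadrupling the iterations only halves the uncertainty, which is why the gains taper off.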
It is true, of course, that the more iterations you run, the longer the simulation will take to run its course!
Key Takeaway: Monte Carlo is a tool for testing the tolerance of a plan to market forces. Therefore, the larger the number of iterations run, the more likely your simulation will capture a representative sequence of occurrences, to provide some indication of how tolerant the plan is to these market forces.
For the Monte Carlo, should one use arithmetic or geometric mean returns?
The question can be illustrated with the following example:
If you invest £1,000 and get a return of 100% over the first 12 months, you have doubled your money to £2,000; a return of -50% in Year Two, however, would mean losing all the gains accumulated in Year One. Despite an arithmetic rate of return of 25% per annum, you are back exactly at square one in terms of compounded wealth. Introducing 'sequence of return' risk into a client’s financial plan is the reason for using the Monte Carlo simulation in the first place, so one doesn't want to start with data that has already corrected (albeit imperfectly) for the 'volatility drag' that is a function of variable returns.
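The arithmetic of the example above can be checked in a few lines:

```python
# The worked example: +100% then -50% leaves compounded wealth unchanged,
# even though the arithmetic mean return is +25% per annum.
returns = [1.00, -0.50]  # +100% in Year One, -50% in Year Two

arithmetic_mean = sum(returns) / len(returns)

wealth = 1_000
for r in returns:
    wealth *= (1 + r)

# The geometric mean is the constant annual return that would produce
# the same end wealth - here, zero.
geometric_mean = (wealth / 1_000) ** (1 / len(returns)) - 1

print(f"arithmetic mean: {arithmetic_mean:.0%}")  # 25%
print(f"end wealth:      £{wealth:,.0f}")         # back to £1,000
print(f"geometric mean:  {geometric_mean:.0%}")   # 0%
```

The gap between the two means (25% vs 0%) is the volatility drag; the geometric mean already embeds it, which is why feeding geometric means into a simulation that also randomises returns would count the drag twice.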
The difference between arithmetic and geometric return values is elaborated somewhat here.
Key Takeaway: You don’t want to double-count the effect of 'volatility drag', and so you should opt for 'arithmetic mean' returns, where possible.
To end on a pragmatic note, a weight of evidence suggests that we humans tend to have short memories, and constant reminders of the possible pitfalls are often necessary.
There may be good reason to exercise caution regarding the assumptions on which the simulation is predicated, including the assumptions that:
(1) Investment returns, over time, are 'normally distributed'.
(2) Variance of a given asset class will remain constant over time.
Recent experience (e.g. 2008) might lead one to wonder whether the model is liable to consistently understate the likelihood of so-called 'black swan' events, which – given the probability distribution – lie many standard deviations from the assumed ‘mean’. There is no perfect way of gauging the future likelihood of such occurrences; we should remind ourselves, regularly, that past performance is nothing more than a guide, not a straitjacket that somehow, magically, constrains or determines future outcomes.
Key Takeaway: The simulation cannot be any better than the assumptions by which it is underpinned. History gives reasons for questioning some of the inherent assumptions, so 'proceed with caution' might be a byword for the wise.