How Far Should You Trust Market Models?

We use models all the time. We are constantly making decisions based on them – everything from simple heuristics (nachos taste good), all the way up to (and including) quantum mechanics. They’re great tools for helping us understand and make sense of the world.

But it’s important to understand their limitations and the dangers of overconfidence they can create. There’s no such thing as a perfect model – they are simplifications of more complex systems.

Every (interesting) model is wrong. The trick is to understand where the model that you are using helps you understand the world, and where it breaks down.

One area where complex models are often used is finance. The financial markets hoover up all the data they can find and are constantly moving.

There simply is no way to meaningfully engage with the financial markets without putting a model in between them and you. You need the Matrix.

But you need to make sure that you are using the right models in the right way, and then interpreting the results correctly. Finance is complex enough as it is – but understanding the hows and whys of the different models we’re using can help strip (some of) that complexity away.

Today, I want to talk about two things:

  1. The tradeoffs that you have to keep in mind when looking at models (using the right models in the right way), and
  2. How we go about incorporating the latest research into portfolios (interpreting and acting on the results).

The Tradeoffs of Models

One of the problems of working with models is that it’s easy to incorporate imprecise inputs, either through user error or just plain misapplication of the model. In other words, garbage in, garbage out.

For instance, we operate within a framework called Modern Portfolio Theory. The basic idea is that when you’re building a portfolio you don’t care all that much about the individual risk and return characteristics of a particular asset.

Rather, you care about what that asset will do to the totality of the portfolio. It sounds pretty straightforward, but it has some pretty profound implications for building portfolios (i.e., diversification is a good thing).
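
To make that concrete, here’s a minimal sketch in Python – with purely hypothetical weights, volatilities, and correlations – of how an asset’s correlation with the rest of the portfolio, not just its standalone risk, drives what it does to total portfolio volatility.

```python
import numpy as np

def portfolio_volatility(weights, vols, correlation):
    """Volatility of a two-asset portfolio from weights, standalone vols, and their correlation."""
    w1, w2 = weights
    s1, s2 = vols
    variance = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2 * w1 * w2 * s1 * s2 * correlation
    return np.sqrt(variance)

# Hypothetical numbers: blend a core holding (10% vol, 80% weight)
# with an asset that is riskier on its own (20% vol, 20% weight).
weights = (0.8, 0.2)
vols = (0.10, 0.20)

for corr in (1.0, 0.5, 0.0, -0.3):
    vol = portfolio_volatility(weights, vols, corr)
    print(f"correlation {corr:+.1f} -> portfolio volatility {vol:.2%}")
```

On its own, the second asset is twice as volatile as the core. But at low or negative correlation the blend ends up less volatile than the core alone – which is exactly why the theory tells you to judge an asset by what it does to the whole portfolio.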

But it’s also really easy to misuse. There are massive swaths of the financial services industry built on abusing this theory. And then when something goes wrong, people turn around and blame the model.

If you use a model incorrectly, it’s not the model’s fault. Unfortunately, academic papers don’t come with warning stickers.

When you’re looking at a model, you want to ask two questions:

  1. Is it testable? Does the model make any predictions that you can check?
  2. Does it tell us anything useful about the world?

If this looks like something you remember from science class, that’s because it is. We look at every model as a hypothesis.

Every model needs to be able to prove its usefulness. If it fails either of those tests, then it’s pretty safe to ignore it. If it’s not testable, there’s no way to know if it works or not. And if it doesn’t tell us anything useful… Well, there’s no need to worry whether it’s testable or not.

But, remember, every model is wrong. Even if the model passes both tests, you need to think about how you can use it.

For instance, we know Newtonian physics isn’t correct. It breaks down when you start working with really big or really small objects. But it’s taught in every high school physics class in America.

Is that because schools are intentionally teaching bad information? No, it’s because Newtonian physics is a really good model for understanding the world at the scale most people work in.

Using Newtonian physics, you can calculate everything from the orbits of the planets to the path of a football with incredible accuracy. Teaching “more precise” models wouldn’t be useful.

One, very few people can actually wrap their heads around quantum mechanics (and we’d still like some people to pursue engineering), and two, it wouldn’t actually help the vast (vast) majority of people understand the world around them.

The first step in getting a good answer is understanding the question and picking the right model to get the answer.

Type I vs. Type II Error

Even when we do pick the right models, though, we need to be careful. When we are looking at situations where we are weighing options and have to make a decision, there are almost never clear answers.

Whenever we have shades of grey, there is always the chance for a mistake. When you’re working with imperfect information, there’s no way to eliminate that possibility.

We’re always working in terms of probabilities – in noisy systems there’s always the possibility that your results are there purely by random chance. So you need to prepare for it.

When you’re working with binary decisions, you can be wrong in two ways. You can say yes when you should have said no (a false positive), or you can say no when you should have said yes (a false negative). Part of using a model correctly is deciding which one you would prefer to make.

Every time the Food and Drug Administration assesses a new drug, they have a decision to make. If they don’t approve it, they are potentially saying no to a drug that could help people, even save lives. On the other hand, if they do approve it, they could be releasing a drug whose possible side effects outweigh its benefits – one that does more harm than good.

That gives us a pretty good picture of false positives and negatives, or type I and type II errors.

A type I error, or a false positive, occurs when the FDA approves a drug that either has no benefit or carries a high risk of harmful side effects.

A type II error, or a false negative, occurs when a beneficial drug fails to win approval.

If you minimize one of these errors, the chance of the other one grows. Someone with a currently untreatable illness is a lot more willing to accept type I errors than someone who just has a headache. There are no right answers, and finding the correct balance is hard.
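
A small simulation makes the tradeoff concrete. The numbers below are made up (hypothetical effect sizes and noise levels), but the mechanic is general: the higher you set the approval bar, the fewer duds get through (type I error falls) and the more genuinely useful drugs get turned away (type II error rises).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: measured trial effect = true effect + noise.
# Ineffective drugs have a true effect of 0; effective drugs have a true effect of 1.
noise = 1.0
ineffective = rng.normal(0.0, noise, n)   # observed effects for drugs with no benefit
effective = rng.normal(1.0, noise, n)     # observed effects for genuinely useful drugs

print("approval bar | type I (approve a dud) | type II (reject a good drug)")
for bar in (0.0, 0.5, 1.0, 1.5, 2.0):
    type_1 = np.mean(ineffective > bar)   # false positive rate
    type_2 = np.mean(effective <= bar)    # false negative rate
    print(f"{bar:11.1f} | {type_1:21.1%} | {type_2:27.1%}")
```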

What Does This Have to Do with Investments?

The stakes are obvious when we are talking about drug approvals, but the same tradeoff applies everywhere. For instance, when we evaluate whether to add a new asset class to our clients’ portfolios, we’re making this exact same determination.

This tradeoff can be reframed in terms of type I and type II error.

A type I error would take place if we added something to our clients’ portfolios with no expected net benefit. Basically, it’s the risk of implementing a bad idea.

Type II occurs if we decide against including something that would have a net benefit to the portfolio – the risk of not implementing a good idea.

Type I can be minimized by avoiding portfolio enhancements, which basically describes the traditional index fund approach. Type II is minimized by lowering the bar for new ideas – similar to a quant approach that uses many signals in hopes that the good ones will outweigh the bad.

If you can only minimize one of these, studies suggest it should be type I – implementing a bad idea. As we’ve said before, active managers struggle to beat benchmark indices, which tells us that performance-enhancing ideas are not that easy to come by.

But we don’t want to minimize one type and go all in on the other. We may be pretty conservative about what we want to add to client portfolios, but we’re always looking for what comes next. Every once in a while there are new findings that do have a place in your portfolio – and we want to make sure that we incorporate them as soon as we can. Everything that we’re doing in our portfolios was a new idea at one point – we just want to be sure that we’re only bringing in the good ideas.

Type I: Defending Against Bad Ideas and Unnecessarily High Costs

Probably the most common way you get type I errors is through something called data mining. The financial markets are wildly noisy, which means we’re always trying to tease out whether a finding is actually real or just an artifact of random noise.

So we (along with everyone else who works with statistics) use a technique called “Hypothesis Testing” to, shockingly, test our hypotheses (as you can tell, statisticians are really creative with their names).

There are all sorts of different ways to do hypothesis testing, but eventually, they all come back with a probability telling you how likely it would be to see your result just by chance if there were really nothing there. But it’s a probability, so there’s always going to be uncertainty.

To give you an idea of just what this means, one standard cutoff people use is that a result has to be “significant at the 5% level.” Roughly speaking, this means that if there were really nothing there, you’d see a result at least that strong less than 5% of the time.

Well, at that level you’re accepting that, when there’s nothing really there, about one test out of every twenty will still look like a genuine result just by chance.
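
Here’s a rough sketch of what that means in practice (the return numbers are invented purely for illustration): generate a pile of “strategies” whose returns are nothing but noise, test each one at the 5% level, and roughly one in twenty will look like a real finding.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_tests, n_months = 1_000, 120

# 1,000 "strategies" whose monthly returns are pure noise (true mean return of zero).
noise_returns = rng.normal(0.0, 0.05, size=(n_tests, n_months))

# Test each one against a mean of zero at the 5% level.
p_values = stats.ttest_1samp(noise_returns, 0.0, axis=1).pvalue
false_discoveries = np.mean(p_values < 0.05)

print(f"Share of pure-noise strategies that look 'significant': {false_discoveries:.1%}")
# Expect roughly 5% – one in twenty – even though none of them have any real edge.
```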

That means we can’t just rely on the results of a model alone when we decide what we want in a portfolio. We need to actually understand why something acts the way it does. We need a risk story.

Why should one group of stocks have higher returns than another? Does this actually make sense?

It makes sense that stocks should be riskier – and have higher returns to compensate investors for that risk – than bonds. It makes sense that small stocks should be riskier than large-cap stocks, and have higher returns to compensate investors for that risk.

If we just ran the numbers, we could find all sorts of interesting relationships in the data that yielded massive returns in the past. But that’s the past – it doesn’t tell us anything about what will happen going forward.

And if we’re trying to sift through the data to find the right relationships, we’re probably running a whole bunch of tests. Remember, just by chance, one out of every twenty tests will look like there’s something there.

That’s data mining. It can be incredibly difficult to root out. Even if you’re watching out for it, it’s always lurking in the background.
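
To see how seductive it can be, here’s a sketch under made-up assumptions: generate a thousand signals that are pure noise, backtest them all, and keep the one with the best in-sample Sharpe ratio. The winning backtest looks genuinely impressive; the same strategy out of sample is nothing.

```python
import numpy as np

rng = np.random.default_rng(7)

n_strategies, n_months = 1_000, 120

# In-sample and out-of-sample monthly returns for 1,000 signals that are pure noise.
in_sample = rng.normal(0.0, 0.04, size=(n_strategies, n_months))
out_sample = rng.normal(0.0, 0.04, size=(n_strategies, n_months))

def sharpe(returns):
    """Annualized Sharpe ratio of each strategy (rows) from monthly returns."""
    return returns.mean(axis=1) / returns.std(axis=1) * np.sqrt(12)

in_sharpe = sharpe(in_sample)
out_sharpe = sharpe(out_sample)
best = in_sharpe.argmax()

print(f"Best backtested Sharpe (in-sample): {in_sharpe[best]:.2f}")
print(f"Same strategy out of sample:        {out_sharpe[best]:.2f}")
```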

Unless we have some reason to believe a relationship will continue into the future – unless there is some underlying source of risk that the market is paying you to take – then we can’t trust it.

When we are deciding what to include in our portfolios, we’re looking for risks with positive expected returns that are sensible, persistent, pervasive, robust, and cost-effective to pursue in well-diversified portfolios. If something can’t rise to that level, we aren’t going to include it in the factors we build our portfolios around.

What Are We Missing (Type II Error, False Negatives)?

Again, type II error occurs when we let a good idea pass us by in the interest of caution. But how much are those foregone benefits really worth?

We already know that good ideas are incredibly hard to come by (just look at how active managers struggle to beat their benchmarks). This means we need to be more cautious about avoiding bad ideas than about missing out on good ones.

Good ideas also have diminishing marginal value. We already incorporate a number of risk factors into our portfolios, so the benefit of adding another factor will be blunted.

It’s just like cookies – the first cookie tastes really good, but the seventh…

That doesn’t mean we should ignore good ideas, but we need to be careful – that eighth cookie needs to look really good for it to be worth taking the chance on it. We’re always looking into new strategy enhancements, and we strive to extract every basis point of value-add and cost savings for our clients.
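
To put a rough number on that diminishing benefit, here’s a stylized sketch that assumes (unrealistically) each factor is an independent return stream with the same standalone volatility. The risk reduction from each additional stream shrinks quickly.

```python
import numpy as np

sigma = 0.15  # hypothetical standalone volatility of each (uncorrelated) return stream

# An equal-weighted portfolio of n uncorrelated streams has volatility sigma / sqrt(n).
for n in range(1, 8):
    vol = sigma / np.sqrt(n)
    prev = sigma / np.sqrt(n - 1) if n > 1 else None
    gain = f"(improvement: {prev - vol:.2%})" if prev else ""
    print(f"{n} streams -> portfolio vol {vol:.2%} {gain}")
```

Going from one stream to two cuts volatility by more than four percentage points; going from six to seven saves less than half a point.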

But it can’t end there. Making changes to portfolios can be detrimental, so it’s important to evaluate new research with a skeptical eye. Yes, optimists are great to have around, but not when you’re considering new strategies.

Keep the bar high when it comes to perceived enhancements. The marginal benefit is likely to be relatively small, while the probability of harmful consequences from type I errors keeps increasing.

Investor Discipline Depends on the Following…

In order to stick to the plan for the long term, you need to be comfortable with your investment portfolio. While your investments are only part of your entire retirement income plan, they are an important part – and they are the most nerve-wracking part of it for most investors.

To have the discipline that will help you have a great long-term investment experience, you need to have confidence that your investment portfolio is where it needs to be.

The markets are going to do what they are going to do. They are going to go up, and they are going to go down. If you aren’t confident in your investment process, it will be incredibly difficult to keep from trying to “fix” your portfolio based on what is going on.

If you don’t have a model for how you think about the markets and how to build a portfolio, then you will likely be whipsawed around by the financial media. But if you have a solid model to guide you then you will be able to stand firm and weather whatever the markets throw your way.

If you want to learn more about being an informed investor, check out our ebook.

McLean Asset Management Corporation (MAMC) is an SEC-registered investment adviser. The content of this publication reflects the views of MAMC and sources deemed by MAMC to be reliable. There are many different interpretations of investment statistics and many different ideas about how to best use them. Past performance is not indicative of future performance. The information provided is for educational purposes only and does not constitute an offer to sell or a solicitation of an offer to buy or sell securities. There are no warranties, expressed or implied, as to accuracy, completeness, or results obtained from any information on this presentation. Indexes are not available for direct investment. All investments involve risk.

The information throughout this presentation, whether stock quotes, charts, articles, or any other statements regarding market or other financial information, is obtained from sources which we and our suppliers believe to be reliable, but we do not warrant or guarantee the timeliness or accuracy of this information. Neither our information providers nor we shall be liable for any errors or inaccuracies, regardless of cause, or the lack of timeliness of, or for any delay or interruption in the transmission thereof to the user. MAMC only transacts business in states where it is properly registered, or excluded or exempted from registration requirements. It does not provide tax, legal, or accounting advice. The information contained in this presentation does not take into account your particular investment objectives, financial situation, or needs, and you should, in considering this material, discuss your individual circumstances with professionals in those areas before making any decisions.

McLean Asset Management