The battle between equally weighted portfolios and optimized portfolios is one that has produced a falsely presumed victor. Previous research has shown that equally weighted portfolios outperform optimized portfolios, suggesting that optimization adds no value in the absence of informed inputs and fostering a naïve distrust of the portfolio optimization process. We, however, challenge both this suggestion and the research that led to it. Optimized portfolios are designed to maximize expected return for a chosen level of risk by accounting for differences in expected returns, standard deviations, and correlations. The 1/N approach to investing assigns asset weights based purely on the number of asset classes, N, and ignores all other information. We understand why the concept of 1/N is so appealing:
- It avoids concentrated positions.
- It never underperforms the worst-performing asset.
- It always invests in the best-performing asset.
- It captures the size alpha because it overweights small-cap stocks and underweights large-cap stocks.
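To make the contrast concrete, the sketch below builds both portfolios from the same hypothetical inputs (none of these numbers come from our datasets). The mean-variance weights maximize expected return minus a risk penalty, subject to full investment and a long-only constraint:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical inputs for three asset classes (illustrative only)
mu = np.array([0.08, 0.05, 0.03])          # expected annual returns
sigma = np.array([0.18, 0.10, 0.05])       # annual volatilities
corr = np.array([[1.0, 0.3, 0.1],
                 [0.3, 1.0, 0.2],
                 [0.1, 0.2, 1.0]])
cov = np.outer(sigma, sigma) * corr        # covariance matrix

def one_over_n(n):
    """1/N ignores mu and cov entirely: weights depend only on N."""
    return np.full(n, 1.0 / n)

def mean_variance(mu, cov, risk_aversion=4.0):
    """Long-only mean-variance: maximize w'mu - (lambda/2) w'Sigma w."""
    n = len(mu)
    objective = lambda w: -(w @ mu - 0.5 * risk_aversion * w @ cov @ w)
    cons = {"type": "eq", "fun": lambda w: w.sum() - 1.0}
    bounds = [(0.0, 1.0)] * n
    res = minimize(objective, one_over_n(n), bounds=bounds, constraints=cons)
    return res.x

w_eq = one_over_n(3)
w_opt = mean_variance(mu, cov)
```

Because 1/N never looks at the expected returns or the covariance matrix, its weights are identical for any three-asset universe; the optimized weights shift whenever the inputs do.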
In 2009, a study by DeMiguel, Garlappi, and Uppal (DGU) was published that seemed to provide definitive proof that equally weighted portfolios outperform optimized portfolios. They performed out-of-sample backtests on seven empirical datasets and used 14 different methods to estimate inputs, including Bayesian estimation and moment restrictions designed to reduce estimation error. They found that, on average, 1/N portfolios generated Sharpe ratios 50% higher than those of the mean-variance optimized portfolios.
We believe that the perceived failure of optimization arises from overreliance on short-term return samples to estimate expected returns. The DGU study used rolling 60- and 120-month return samples to model expected returns, which are prone to small-sample error and in many cases imply implausible expectations. No thoughtful investor would blindly extrapolate historical means estimated over such short samples.
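To see how unreliable a 60-month sample mean is, consider a simple Monte Carlo sketch. The mean and volatility below are hypothetical, roughly equity-like assumptions, not figures from the DGU study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume a true monthly mean of 0.5% and volatility of 4.5% (hypothetical)
true_mu, vol, n_months = 0.005, 0.045, 60

# Sample means from 10,000 independent 60-month histories
sample_means = rng.normal(true_mu, vol, size=(10_000, n_months)).mean(axis=1)

# Annualized spread of the estimates versus the true annualized mean
print(f"true annual mean: {12 * true_mu:.1%}")
print(f"5th-95th percentile of estimates: "
      f"{12 * np.percentile(sample_means, 5):.1%} to "
      f"{12 * np.percentile(sample_means, 95):.1%}")
```

Under these assumptions the 60-month sample mean routinely wanders several percentage points per year away from the true mean, and a nontrivial fraction of the estimates are negative even though the true premium is positive.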
Think of optimization as a sophisticated navigation system. 1/N is the navigational equivalent of wandering aimlessly in search of our destination, ignoring not only the GPS but also any posted road signs. We argue that it is better to use the GPS; we just need to specify the destination. If our goal is to drive to the beach, but we instruct the GPS to take us to the office, we should not fault the GPS for directing us to the office. Those who favor 1/N think of optimization this way.1
Our study challenges the notion that error maximization explains the finding that 1/N produces portfolios superior to optimized ones. We used simple models of expected returns that assumed no forecasting skill, and we imposed no constraints other than the long-only constraint. With 13 datasets comprising 1,028 data series, we constructed over 50,000 optimized portfolios and evaluated their out-of-sample performance. We grouped the datasets into three categories: asset class, beta, and alpha. These categories correspond to the investment process that most institutional investors follow: first, asset/liability management; then, beta allocation; and finally, the search for alpha. Table 1 shows our datasets. We used monthly data except for the 500 stocks, for which we used daily data to accommodate the shorter period and the larger covariance matrix.
For each dataset, we compared the out-of-sample performance of the market portfolio, the 1/N portfolio, and the optimized portfolios. To measure the performance of the optimized portfolios, we forecasted risk and return only on the basis of information available at the time of portfolio construction—making every forecast out of sample. Then, we invested in the optimal allocation that maximized the trade-off between expected return and risk.
For the asset/liability simulations, we assumed five-year holding periods with annual rebalancing and measured average performance for all five-year holding periods (we reoptimized monthly for all other simulations). We used the Sharpe ratio as our measure of performance and did not impose any constraints other than the long-only constraint. We did this for two reasons: (1) we wanted our results to be directly comparable to those of previous studies, and (2) we wanted to evaluate the performance of optimization in its simplest form.
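The out-of-sample procedure amounts to a rolling loop in which every weight decision uses only data available beforehand. The sketch below applies it to simulated returns with the 1/N rule as a placeholder weight function; all inputs are hypothetical and this is a simplified illustration, not our actual backtest code:

```python
import numpy as np

def sharpe_ratio(excess_returns, periods_per_year=12):
    """Annualized Sharpe ratio from periodic excess returns."""
    return (excess_returns.mean() / excess_returns.std(ddof=1)
            * np.sqrt(periods_per_year))

def out_of_sample_backtest(returns, weight_rule, window=60):
    """At each month t, compute weights from data strictly before t,
    then record the realized month-t portfolio return."""
    realized = []
    for t in range(window, len(returns)):
        w = weight_rule(returns[t - window:t])   # past data only
        realized.append(returns[t] @ w)
    return np.array(realized)

# Placeholder rule: 1/N, which ignores the estimation window entirely
equal_weight = lambda past: np.full(past.shape[1], 1.0 / past.shape[1])

rng = np.random.default_rng(1)
simulated = rng.normal(0.005, 0.04, size=(240, 4))   # hypothetical data
perf = out_of_sample_backtest(simulated, equal_weight)
```

Any other weight rule (minimum-variance, mean-variance with a return model) plugs into the same loop, which is what keeps every forecast out of sample.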
We used three approaches to estimate expected returns. Although incredibly simple, these expected returns have an important difference from most of the expected returns used in previous studies: they do not rely on short rolling samples of realized returns, which often imply implausible expectations.
- First, we generated the minimum-variance portfolio. We designed this to determine whether we could improve risk-adjusted performance by simply optimizing to reduce risk on the basis of pure extrapolation of the covariance matrix. In this experiment, expected returns were constant for all assets.
- Second, for each asset, we estimated a risk premium over a long data sample and assumed that it remained constant throughout the backtest. To do so, we used data available before the backtest start date. Table 2 shows our assumptions for the asset/liability optimizations. For the betas, we simply used the first 50 years of each database.
- Third, in the spirit of classical statistics, we used a growing sample that included all available out-of-sample data.
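The three approaches can be illustrated in a few lines. This is a sketch with simulated data; none of the numbers are drawn from our datasets:

```python
import numpy as np

rng = np.random.default_rng(3)
pre_sample = rng.normal(0.005, 0.04, size=(600, 4))  # hypothetical pre-backtest history
backtest = rng.normal(0.005, 0.04, size=(240, 4))    # hypothetical backtest period

# Approach 1 (minimum-variance): the same expected return for every asset,
# so only the covariance matrix drives the optimization
mu_constant = np.full(4, 0.005)

# Approach 2: long-run risk premia estimated once from data available
# before the backtest start date, then held fixed throughout
mu_long_run = pre_sample.mean(axis=0)

# Approach 3: an expanding sample mean using all data observed so far
def expanding_mean(returns, t):
    return returns[:t].mean(axis=0)

mu_month_120 = expanding_mean(backtest, 120)
```

None of these models embeds forecasting skill; each is a deliberately naive but plausible expectation, in contrast to a rolling 60-month mean.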
To estimate expected volatilities and correlations, we used the monthly rolling 5-, 10-, and 20-year covariance matrices, as well as the all-data approach (all matrices were equally weighted).
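A sketch of these covariance estimators, again with simulated data (the window lengths match those listed above; everything else is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
returns = rng.normal(0.004, 0.05, size=(360, 5))  # hypothetical monthly returns

def rolling_cov(returns, t, window_years):
    """Equally weighted covariance over the trailing window ending at month t-1."""
    w = 12 * window_years
    return np.cov(returns[t - w:t], rowvar=False)

def all_data_cov(returns, t):
    """Expanding-window covariance: every month available before month t."""
    return np.cov(returns[:t], rowvar=False)

# The four estimators at month 300: 5-, 10-, and 20-year rolling, plus all data
estimates = [rolling_cov(returns, 300, y) for y in (5, 10, 20)]
estimates.append(all_data_cov(returns, 300))
```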
Table 3 provides a summary of our experiments. Due to a lack of available data, we focused on the minimum-variance approach and the five-year covariance matrix for the alpha portfolios; given the large number of securities involved, we used a daily covariance matrix for the security selection experiment.
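For concreteness, the unconstrained minimum-variance portfolio has a simple closed form: the weights are proportional to the row sums of the inverse covariance matrix, rescaled to sum to one. The covariance matrix below is hypothetical, and our actual experiments also imposed the long-only constraint:

```python
import numpy as np

def min_variance_weights(cov):
    """Unconstrained minimum-variance weights:
    w = Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Hypothetical covariance matrix for three assets
cov = np.array([[0.0324, 0.0054, 0.0009],
                [0.0054, 0.0100, 0.0010],
                [0.0009, 0.0010, 0.0025]])
w = min_variance_weights(cov)
```

By construction, no other fully invested portfolio (including 1/N) has a lower variance under the same covariance matrix.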
Figure 1 shows the results for our asset/liability management optimizations. We included in-sample optimization results, which show how we would perform if we knew the true parameters of the distribution. Figure 1 shows that optimized portfolios significantly outperform the 1/N portfolio. We found that even without any ability to forecast returns, optimization of the covariance matrix alone adds value.
Figure 2 shows the results for our beta universes. For these backtests, we averaged Sharpe ratios across beta universes. The results in DGU (2009) were based on this dataset, which is notorious for the exceptional performance of the 1/N portfolio, as evidenced by its high Sharpe ratio as compared with that of the market portfolio. Nonetheless, the optimized portfolios outperformed 1/N. Although we report only averages, we note that in some cases (e.g., when allocating among size deciles), 1/N outperformed optimization. Out-of-sample results are often noisy, however, so we are mostly interested in average performance across backtests.
Figure 3 shows the results for our alpha universes. We assumed that the market portfolios for hedge funds and asset managers were equally weighted, and used the S&P 500 as the market portfolio for security selection and the S&P GSCI as the market portfolio for commodities. Optimization of the covariance matrix, without estimates of expected return, once again significantly improved out-of-sample performance as compared with the 1/N portfolio.
The minimum-variance portfolio performed well in our asset class, beta, and alpha simulations. In some cases, it outperformed optimization with expected returns. This is plausible for a few reasons. First, our expected return models did not assume any forecasting skill. Second, introducing expected returns does not necessarily increase the Sharpe ratio of high-return, high-risk portfolios when leverage is not allowed.
From this study, we conclude that optimized portfolios generate superior out-of-sample performance compared with equally weighted portfolios. We also showed that reliance on small historical samples for estimating returns often produces views that any savvy investor would reject. Although we focused on mean-variance optimization, new portfolio construction technology allows for increased flexibility and provides greater insight into portfolio behavior and performance.
We would also like to point out some practical problems with 1/N beyond those highlighted in our experiment.
- Equal weighting is not sensitive to return and risk estimates, but is entirely dependent on the choice of the asset class universe. Each asset class is assigned equal importance regardless of how many there are. The 1/N approach essentially transfers the risk of input estimation error to the risk of selecting the right asset classes.
- The 1/N heuristic offers investors only one portfolio, regardless of their attitude toward risk. The efficient frontier gives investors the choice of a wide variety of portfolios, each catering to a different appetite for risk.
- Equal weighting ignores the capacity of each asset class as well as a variety of other considerations, which might favor one asset class over another.
Essentially, 1/N makes sense only for investors who believe they have no insight into the expected returns and risks of asset classes. Investors who have access to historical data and the sound judgment to apply optimization should instead identify the portfolio that best suits their investment goals.
Kritzman, Mark P. A Practitioner's Guide to Asset Allocation. John Wiley & Sons, 2017, p. 61.