By Ashby Monk and Dane Rook, Stanford Global Projects Center
To succeed at investing, you must master time travel.[i] You need to be proficient at beaming yourself into a hypothetical future, and then analyzing the best route to get “there” from the actual “now.” And the route itself is crucial: arriving safely at your target destination can matter just as much as reaching it quickly.
This analytical time-hopping isn’t very tricky if the imagined future looks enough like the past, and if that future isn’t too distant (excluding wormholes and vortexes). But that isn’t the sort of time travel required for most long-term investors (LTIs), such as pension funds, endowments, and sovereign wealth funds. Rather, the possible futures to which LTIs must navigate are usually far off, and may bear seemingly little resemblance to the past or present. And the further ahead these possible futures are, the more of them there are to explore.
Short-term investors mostly avoid these extra complications, but also forgo the sizable opportunities that go with them. Navigating to more distant futures offers LTIs a greater diversity of paths to reach them, which can be an enormous advantage with the right tools for wayfinding. Yet, historically, the tools at LTIs’ disposal haven’t been up to the job. Improving LTIs’ time-traveling capabilities, and empowering them to reap more of the inherent advantages of being long-term-oriented, will require upgrading their ‘navigational technology’ (nav-tech). Simulation is a prime candidate for such an upgrade.
This article series investigates how LTIs can use more advanced simulation technologies to succeed over increasingly long horizons. The current article tackles the issue at a high level: it looks at the basics of why existing simulation toolkits fall short in long-term investing, and spells out the main opportunities for overcoming these shortfalls with emerging tech. Subsequent articles will delve into these topics in finer detail. Throughout the series, we’ll be introducing ideas from a new paradigm in long-term investment that we’ve been developing both as software at RCI and research at Stanford: Portfolio Navigation.
From Simulation to Navigation
Computer-aided simulation has featured in financial analysis for many decades, and can now be performed cheaply and easily: for example, simple tools for building and running simulations come built into most spreadsheet software (or are available as convenient plug-ins). The chief use for these tools is in exploring future investment outcomes - both intermediate and final - in terms of their associated payoffs and probabilities, by tracing feasible paths through time (...time travel!). Any tool that can perform this essential function might be called ‘simulation technology’.[ii] In this sense, most LTIs use simulation technology, whether purchased as off-the-shelf packages (like Stata or Crystal Ball) or built in-house as customized programs (e.g., with help from statistics-friendly programming languages, such as R, Python, or Julia).
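For concreteness, here is a minimal sketch (in Python, one of the languages mentioned above) of the kind of engine we have in mind: a simple Monte Carlo model that traces feasible portfolio paths through time from assumptions the user supplies explicitly. The return and volatility figures are purely illustrative assumptions, not recommendations.

```python
import numpy as np

# A minimal, user-limited Monte Carlo simulator: every assumption below
# (expected return, volatility, horizon) is supplied explicitly by the user.
rng = np.random.default_rng(seed=42)

n_paths, n_years = 10_000, 30            # a long-term investor's horizon
annual_return, annual_vol = 0.06, 0.12   # illustrative user assumptions
start_value = 100.0

# Trace feasible paths through time: each row is one possible future.
shocks = rng.normal(annual_return, annual_vol, size=(n_paths, n_years))
paths = start_value * np.cumprod(1.0 + shocks, axis=1)

# The user must still interpret the outputs (i.e., the tool is user-translated).
print("Median terminal value:", round(np.median(paths[:, -1]), 1))
print("5th percentile terminal value:", round(np.percentile(paths[:, -1], 5), 1))
```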
Most of the commonplace simulation technologies used by LTIs are what can be termed user-limited, because they are substantially:
- User-defined: they rely heavily (often exclusively) on data and assumptions that are provided explicitly by the people (whether individuals or teams) who construct and run the simulations; and
- User-translated: they rely predominantly on users to interpret the outputs of simulations, and to translate these outputs into actions (whether for further analysis or implementation as live strategies).[iii]
These user-limited simulations may be sufficient in some instances, e.g., when: 1) relatively few assumptions are needed (especially when the simulation’s outputs are not particularly sensitive to them, and/or assumptions can be straightforwardly and confidently made); 2) the future can be reasonably approximated from simple datasets; and 3) judging optimal actions from simulation outputs requires only uncomplicated calculations or judgment. But these situations rarely apply to LTIs when they’re analyzing long-horizon portfolios. Instead, LTIs are typically (and increasingly) forced to grapple with the need to:
- Generate complicated sets of assumptions about complex phenomena, and base these assumptions on inadequate datasets or unproven theories
- Use datasets that imperfectly approximate the future (with poor knowledge of exactly how future data will differ from past data), or that are non-standard in key ways (for example, alternative data, such as social-media or geolocation data)[iv]
- Craft dynamic portfolio-management strategies that balance multiple competing objectives (e.g., liquidity, cost, volatility)
In short, LTIs’ needs are poorly served by user-limited simulation tools, which badly clip their wings as long-haul time travelers.
Simulation tools that are user-limited are a bit like coloring books: the simulation engine does the shading-in, but it’s up to the user to provide all of the constraining outlines and judge the quality of the final picture. A more fit-for-purpose simulation capability would instead operate like modern navigation tools, and be built around a search-and-suggest framework. For example, Google Maps, Waze, and Apple Maps are all essentially simulation engines that accept some constraints from the user (departure and destination points), and then perform a search over simulated journeys to find ideal sets of directions, as well as suggest useful considerations to users (e.g., propose routes that stick to surface streets, recommend food or fuel stops along the way, alert the user about toll roads).
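To make the analogy concrete, here is a toy sketch of a search-and-suggest loop for portfolios: the user supplies only a ‘destination’ (a wealth target and a tolerable chance of falling short), and the engine searches over simulated journeys (candidate equity/bond mixes), then suggests the calmest routes that still arrive. All figures are illustrative assumptions.

```python
import numpy as np

# A toy "search-and-suggest" engine: the user states a destination; the engine
# simulates many candidate routes and suggests the smoothest feasible ones.
rng = np.random.default_rng(0)
n_paths, n_years = 20_000, 20
target_wealth, max_shortfall_prob = 150.0, 0.20     # the user's "destination"

feasible_routes = []
for equity_weight in np.linspace(0.0, 1.0, 21):      # candidate routes
    mu = 0.07 * equity_weight + 0.03 * (1 - equity_weight)    # illustrative
    vol = 0.16 * equity_weight + 0.05 * (1 - equity_weight)   # illustrative
    shocks = rng.normal(mu, vol, size=(n_paths, n_years))
    terminal = 100.0 * np.prod(1.0 + shocks, axis=1)
    shortfall_prob = np.mean(terminal < target_wealth)
    if shortfall_prob <= max_shortfall_prob:          # the route reaches the target
        feasible_routes.append((vol, equity_weight, shortfall_prob))

# Suggest the smoothest feasible routes first (lowest volatility), echoing the
# idea that arriving safely can matter as much as arriving quickly.
for vol, w, p in sorted(feasible_routes)[:3]:
    print(f"Equity weight {w:.0%}: volatility {vol:.1%}, shortfall probability {p:.0%}")
```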
We see the search-and-suggest paradigm, and the emerging technologies that can enable it, as a prescription for freeing LTIs from user-limited simulation. For convenience, we’ll refer to these technologies as ‘nav-tech’, and they are fundamental to the novel approach to portfolio management that we’re developing: Portfolio Navigation. The end-goal for Portfolio Navigation is to give investors a way to plan their portfolios in much the same way that they plan trips using Google Maps. That dream is now within reach. Below, we discuss some of the nav-tech that we feel will be most influential for Portfolio Navigation, and frame that discussion around the main improvements that it can enable for long-term simulation and investing.
Nav-Tech Capabilities in Long-Term Investing
To make simulation less user-limited, the nav-tech that supports it should:
- Enable generative approaches to assumptions and data, whereby these are extracted, processed, and pre-populated by adaptive software, rather than being explicitly or exclusively supplied by users (of course, users should be able to override such ‘smart defaults’ if they choose to do so; a minimal sketch of this idea follows this list)
- Make suggestion the cornerstone of how simulations are interpreted and turned into action - i.e., relevant nav-tech should have deep abilities in ‘suggesting’ both ways to think about the outputs (including further ways to analyze them) and ways that the outputs may be implemented strategically (whether that’s specific portfolio compositions, link-ups with subject-matter experts, or auto-generated diligence lists for external asset managers).
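As a minimal illustration of the ‘smart defaults’ idea from the first point above, the sketch below pre-populates a simulation’s return and volatility assumptions from whatever historical data is available, while letting the user override any of them explicitly. It is a deliberately simple stand-in for the adaptive software described above, and the function and field names are hypothetical.

```python
import numpy as np

def smart_defaults(historical_returns, user_overrides=None):
    """Pre-populate simulation assumptions from data, but let the user override.
    `historical_returns` is an array of periodic returns; `user_overrides` is an
    optional dict such as {"annual_return": 0.05}. (Hypothetical names.)"""
    returns = np.asarray(historical_returns)
    defaults = {
        "annual_return": float(np.mean(returns)),     # extracted from data
        "annual_vol": float(np.std(returns, ddof=1)),
    }
    defaults.update(user_overrides or {})              # explicit user choices win
    return defaults

# Usage: data-driven assumptions, then the same call with one value overridden.
history = np.random.default_rng(1).normal(0.05, 0.10, size=40)
print(smart_defaults(history))
print(smart_defaults(history, {"annual_return": 0.04}))
```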
From our longstanding research into investment technology, we’re convinced that these twin aims can be met with a combination of nav-tech solutions that achieve three functional capabilities: augmentation, coordination, and recommendation. Let’s now cover each of these capabilities in turn.
Augmentation
A major trend over the last half-decade in finance (and other quantitative fields) is the explosion of analytical technologies for untangling complex relationships - most notably, machine-learning techniques driven by deep neural networks. These types of complexity-tolerant tools (which also include agent-based models and other adaptive algorithms) have many potential roles to play in augmenting simulation: from helping generate more realistic artificial datasets, to pinpointing high-dimensional relationships that are relevant for simulations but are near-impossible to express in closed-form mathematical expressions. These novel algorithmic power-tools can’t be practically implemented in spreadsheets. As such, spreadsheet-based simulation platforms will become increasingly disadvantaged as novel, whizbang uses are found for these analytical algorithms in simulating long-term investment strategies.
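As a simple stand-in for the deep generative models mentioned above, the sketch below fits a Gaussian mixture to (synthetic, illustrative) two-asset return data and then samples an artificial dataset that preserves regime structure a single normal distribution would miss. Real applications would use far richer data and models; every number here is an assumption for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Stand-in "historical" monthly returns for two assets: a calm regime and a
# stress regime (illustrative numbers only).
rng = np.random.default_rng(2)
calm = rng.multivariate_normal([0.006, 0.003], [[0.0010, 0.0002], [0.0002, 0.0004]], 400)
stress = rng.multivariate_normal([-0.010, 0.004], [[0.0040, -0.0010], [-0.0010, 0.0006]], 100)
historical = np.vstack([calm, stress])

# Fit a mixture model, then sample a much larger artificial dataset for simulation.
model = GaussianMixture(n_components=2, random_state=0).fit(historical)
synthetic, _ = model.sample(10_000)

print("Correlation in data:     ", round(np.corrcoef(historical.T)[0, 1], 2))
print("Correlation in synthetic:", round(np.corrcoef(synthetic.T)[0, 1], 2))
```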
Likewise, the disadvantages of spreadsheet-rooted simulations will further deepen with the continued surge in unconventional datasets, e.g., datasets from prediction markets and other sources of so-called alternative data. These datasets are often large in two dimensions: the sheer number of entries they contain, and the diversity of metadata fields associated with each entry. Most spreadsheet software can only comfortably handle a few million entries (and it tends to slow down catastrophically at that level). But these datasets are routinely several orders of magnitude larger! Moreover, these datasets become most valuable to investors when they are integrated together. Augmenting simulations with these unconventional datasets can give a huge boost to the richness of long-term simulation (e.g., by clarifying which external factors the simulation’s outcomes are most sensitive to). But a clear takeaway is that LTIs will be unable to take full advantage of this dialed-up power if they remain handcuffed to low-capacity simulation platforms (like those based on spreadsheets).
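As a small illustration of such integration outside a spreadsheet, the sketch below joins two hypothetical alternative datasets (geolocation-derived foot traffic and social-media sentiment) by date and ticker using a dataframe library. The column names and data are invented; real versions of such datasets can run to hundreds of millions of rows.

```python
import numpy as np
import pandas as pd

# Two hypothetical alternative datasets, keyed by date and ticker.
dates = pd.date_range("2024-01-01", periods=250, freq="B")
tickers = ["AAA", "BBB", "CCC"]
idx = pd.MultiIndex.from_product([dates, tickers], names=["date", "ticker"])

foot_traffic = pd.DataFrame(
    {"visits": np.random.default_rng(3).poisson(1_000, len(idx))}, index=idx)
sentiment = pd.DataFrame(
    {"sentiment": np.random.default_rng(4).normal(0.0, 1.0, len(idx))}, index=idx)

# Integrate the two sources, then ask how the signals co-move for each ticker.
merged = foot_traffic.join(sentiment)
print(merged.groupby(level="ticker").corr().head(6))
```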
Coordination
A somewhat obvious way to make simulations less user-limited is to increase the number of informed users who are involved in building, running, and interpreting them. A universal finding from our research into investment technology over the past five years is that simulation tends to be a highly asocial activity within long-term investment organizations: the whole of building, running, and interpreting simulations is performed by a handful of specialists (often only one person!), and these teams often don’t engage in any heavy interaction with other teams across the organization during the process of simulation development and analysis. This asocial tendency has (at least) two negative consequences. First, it means simulations are often more impoverished than they need to be - they don’t fully tap the deep wells of knowledge that may exist across the organization.
Second, it means the feedback cycles on simulation can be cripplingly slow. For example, simulations are often performed to inform risk committees and Boards of Directors, who make decisions based on them but aren’t often involved in building or running them. These higher-ups may get hold of the results of a simulation, and then ask ‘what-ifs’ about specific assumptions or data inputs. They boot these queries back to the simulation team, who then must identify and fiddle with the relevant inputs (or intermediate constraints/dynamics), before sending the outputs back to the higher-ups (who may only convene irregularly). Understandably, this process can be slow and buggy.
But the builders and keepers of simulations shouldn’t be faulted for this arduousness. Most existing simulation platforms don’t lend themselves to coordinated development and operation of simulations: the asocialness is an innate design flaw. Anyone who’s ever tried to collaborate on an Excel workbook with multiple people can attest to this (and Google Sheets is only marginally better). But this hindrance shouldn’t persist, as there’s been a recent proliferation of software tools for co-working on scenario-based analysis. Utilizing these new tools could allow long-term simulations to more extensively leverage:
- Version control: simulations are often developed incrementally, and the need to revert to some earlier configuration is commonplace. Modern collaboration technologies prioritize easy version comparison and the ability to return to earlier versions without the hassle of back-tracking, or of emailing around various versions of a spreadsheet simulator and making sure that colleagues are all working from the very same version (a lightweight sketch of one such approach follows this list).
- Knowledge management: as noted above, current approaches to simulation make it hard to gather and synchronize expertise from across the organization when designing assumptions, assembling the relevant datasets, and analyzing simulation outputs. Recent improvements to technology that facilitates knowledge management (e.g., by mapping out graphs of where expertise and information are stored within people and documents inside the organization) could substantially improve the quality of long-term simulations by making it more convenient to find and incorporate diffuse internal knowledge into them.
- External collaboration: sometimes the ideal data and knowledge for improving simulations don’t reside inside the organization itself, but instead come from other organizations - like external asset managers and consultants. For many LTIs, the process of collecting and collating inputs (data, assumptions) from these entities (of which there can be many) and manually entering them is time-intensive and error-prone. This flawed procedure can be sidestepped by better collaboration software, which can automate the process of harvesting inputs from relevant external partners. Such systems could even facilitate the sharing of assumptions and data between peer LTIs.
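As a lightweight illustration of the version-control point above, the sketch below stores each simulation configuration under a content hash and timestamp, so colleagues can reference (and rerun) exactly the same version instead of emailing spreadsheets around. The file layout and field names are hypothetical, and dedicated collaboration platforms go much further (comparison, merging, access control).

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def save_versioned_config(config: dict, store: Path = Path("sim_configs")) -> str:
    """Persist a simulation configuration under a short content hash.
    Identical assumptions always yield an identical version ID."""
    store.mkdir(exist_ok=True)
    payload = json.dumps(config, sort_keys=True)
    version = hashlib.sha256(payload.encode()).hexdigest()[:12]
    record = {"version": version,
              "saved_at": datetime.now(timezone.utc).isoformat(),
              "config": config}
    (store / f"{version}.json").write_text(json.dumps(record, indent=2))
    return version

# Usage: two analysts who save identical assumptions get identical version IDs.
version = save_versioned_config(
    {"annual_return": 0.06, "annual_vol": 0.12, "horizon_years": 30})
print("Simulation config version:", version)
```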
Recommendation
The whole purpose of simulations is to help in the “search” for proper courses of action. But, in most cases, these searches are a far cry from the semantic recommendation engines behind Google and Facebook, which infer what a searcher is seeking and make suggestions related to it (sometimes in non-obvious ways). Few out-of-the-box (or even bespoke) simulation toolkits today actively recommend investment strategies to users - they generally lack sophisticated optimization capabilities, and represent just one link in the analytical chain rather than a complete decision-making solution. Next-generation nav-tech can help overcome this, but - possibly more importantly - the recommendations it furnishes need not be limited to strategy or portfolio design. Recommendations could also include identification of relevant in-house or external experts to improve the simulation (via the knowledge-management capabilities noted above), or point to datasets that could augment the simulation. Thrilling new technologies like Diffbot and Facebook’s Retrieval-Augmented Generation (RAG) model are able to pinpoint and analyze documents and datasets on the web (and, feasibly, internal data stores) that could plug gaps in simulation models. Overall, the recommendation abilities of emerging nav-tech could prove to be rocket fuel for making the whole simulation process more streamlined and comprehensive.
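One toy example of a recommendation that goes beyond strategy design: rank which input assumptions a simulated outcome is most sensitive to, so the engine can suggest where additional data or expertise would matter most. The sketch below uses a crude one-at-a-time sensitivity analysis on an illustrative Monte Carlo model; all assumptions and figures are hypothetical.

```python
import numpy as np

def median_terminal_wealth(assumptions, n_paths=20_000, n_years=20, seed=5):
    """Median terminal wealth from a simple Monte Carlo run of the assumptions."""
    rng = np.random.default_rng(seed)
    shocks = rng.normal(assumptions["annual_return"], assumptions["annual_vol"],
                        size=(n_paths, n_years))
    paths = (100.0 * np.prod(1.0 + shocks, axis=1)
             * (1.0 - assumptions["annual_cost"]) ** n_years)
    return np.median(paths)

# Crude one-at-a-time sensitivity: nudge each assumption by 10% and rank the
# impact. A recommendation engine could use such rankings to suggest which
# assumptions deserve better data, or which experts to consult.
baseline = {"annual_return": 0.06, "annual_vol": 0.12, "annual_cost": 0.005}
base_value = median_terminal_wealth(baseline)

impacts = {}
for key in baseline:
    bumped = dict(baseline, **{key: baseline[key] * 1.10})
    impacts[key] = abs(median_terminal_wealth(bumped) - base_value)

for key, impact in sorted(impacts.items(), key=lambda kv: -kv[1]):
    print(f"Outcome sensitivity to {key}: {impact:.1f}")
```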
Time Traveling to...Next Time
It should by now be apparent that improving simulation is essential to helping LTIs sharpen their skills in time travel, and to deepening their understanding of their strategies, their organizations, and the future of the markets in which they operate. In our upcoming investigations, we’ll discuss these improvements in more intricate detail. Until then, safe travels!
AUTHOR BIOGRAPHIES
Ashby Monk, PhD
Ashby Monk is the Executive and Research Director of the Stanford Global Projects Center, and was named by CIO Magazine as one of the most influential academics on institutional investing. He is also a member of the Future of Finance Council at the CFA Institute, and the co-founder of Long Game Savings. He holds a Doctorate in Economic Geography from Oxford University, a Master’s in International Economics from the Université de Paris I – Panthéon-Sorbonne, and a Bachelor’s in Economics from Princeton University.
Dane Rook, PhD
Dane Rook is a Research Engineer at Stanford University’s School of Engineering, where he explores the intersection of machine intelligence and long-term investing. He was previously a researcher at Kensho, a successful AI startup, and at J.P. Morgan. Dr. Rook earned his Doctorate from the University of Oxford as a Clarendon Scholar. He also holds degrees from the University of Cambridge and the University of Michigan. Dr. Rook is an advisor to technology startups in both the U.S. and Europe.
[i] By “successful,” we do not mean just occasionally being lucky. Rather, we mean reliably meeting (and regularly beating) reasonably high expectations for investment performance.
[ii] When speaking about investment simulations, many financial professionals mean tools or analyses that involve the use of randomness (e.g., via random-number generators), so that some aspects of future possibilities are non-deterministic.
[iii] Many off-the-shelf simulation toolkits do provide (semi-)automated graphical outputs that help build intuition when interpreting outputs. Despite these conveniences, such toolkits do little to explain the differences between courses of action, or to recommend (either intermediate or overall) actions.
[iv] Note: the need to use datasets that are suspected of poorly reflecting the future arises when the future is expected to look significantly different from the past, but where few (or no) better substitute datasets exist. In such cases, conventional market datasets (e.g., prices, trading volumes) may no longer be adequate on their own, and there arises the need to turn to alternative datasets for better predictive ability.